I0318 10:46:44.093424 6 e2e.go:224] Starting e2e run "c1d8ddb4-6905-11ea-9856-0242ac11000f" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1584528403 - Will randomize all specs Will run 201 of 2164 specs Mar 18 10:46:44.289: INFO: >>> kubeConfig: /root/.kube/config Mar 18 10:46:44.292: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Mar 18 10:46:44.307: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 18 10:46:44.344: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 18 10:46:44.344: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 18 10:46:44.344: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Mar 18 10:46:44.352: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Mar 18 10:46:44.352: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Mar 18 10:46:44.352: INFO: e2e test version: v1.13.12 Mar 18 10:46:44.353: INFO: kube-apiserver version: v1.13.12 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:46:44.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets Mar 18 10:46:44.478: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-c25c467f-6905-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 10:46:44.503: INFO: Waiting up to 5m0s for pod "pod-secrets-c25cc795-6905-11ea-9856-0242ac11000f" in namespace "e2e-tests-secrets-98wmc" to be "success or failure" Mar 18 10:46:44.514: INFO: Pod "pod-secrets-c25cc795-6905-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.956746ms Mar 18 10:46:46.518: INFO: Pod "pod-secrets-c25cc795-6905-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014739275s Mar 18 10:46:48.521: INFO: Pod "pod-secrets-c25cc795-6905-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018269503s STEP: Saw pod success Mar 18 10:46:48.521: INFO: Pod "pod-secrets-c25cc795-6905-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:46:48.524: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-c25cc795-6905-11ea-9856-0242ac11000f container secret-env-test: STEP: delete the pod Mar 18 10:46:48.562: INFO: Waiting for pod pod-secrets-c25cc795-6905-11ea-9856-0242ac11000f to disappear Mar 18 10:46:48.573: INFO: Pod pod-secrets-c25cc795-6905-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:46:48.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-98wmc" for this suite. Mar 18 10:46:54.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:46:54.656: INFO: namespace: e2e-tests-secrets-98wmc, resource: bindings, ignored listing per whitelist Mar 18 10:46:54.671: INFO: namespace e2e-tests-secrets-98wmc deletion completed in 6.09437831s • [SLOW TEST:10.317 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:46:54.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-c87fe597-6905-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 10:46:54.803: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8823d24-6905-11ea-9856-0242ac11000f" in namespace "e2e-tests-configmap-rmwwj" to be "success or failure" Mar 18 10:46:54.807: INFO: Pod "pod-configmaps-c8823d24-6905-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.915144ms Mar 18 10:46:56.811: INFO: Pod "pod-configmaps-c8823d24-6905-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007956903s Mar 18 10:46:58.815: INFO: Pod "pod-configmaps-c8823d24-6905-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012467932s STEP: Saw pod success Mar 18 10:46:58.815: INFO: Pod "pod-configmaps-c8823d24-6905-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:46:58.819: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c8823d24-6905-11ea-9856-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 18 10:46:58.839: INFO: Waiting for pod pod-configmaps-c8823d24-6905-11ea-9856-0242ac11000f to disappear Mar 18 10:46:58.843: INFO: Pod pod-configmaps-c8823d24-6905-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:46:58.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rmwwj" for this suite. Mar 18 10:47:04.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:47:04.900: INFO: namespace: e2e-tests-configmap-rmwwj, resource: bindings, ignored listing per whitelist Mar 18 10:47:04.943: INFO: namespace e2e-tests-configmap-rmwwj deletion completed in 6.097273829s • [SLOW TEST:10.272 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:47:04.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tnx8q STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 10:47:05.037: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 10:47:27.119: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.231:8080/dial?request=hostName&protocol=http&host=10.244.2.166&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-tnx8q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 10:47:27.119: INFO: >>> kubeConfig: /root/.kube/config I0318 10:47:27.162048 6 log.go:172] (0xc00079f1e0) (0xc000355f40) Create stream I0318 10:47:27.162074 6 log.go:172] (0xc00079f1e0) (0xc000355f40) Stream added, broadcasting: 1 I0318 10:47:27.164095 6 log.go:172] (0xc00079f1e0) Reply frame received for 1 I0318 10:47:27.164151 6 log.go:172] (0xc00079f1e0) (0xc000ee2140) Create stream I0318 10:47:27.164171 6 log.go:172] (0xc00079f1e0) (0xc000ee2140) Stream added, broadcasting: 3 I0318 10:47:27.165222 6 log.go:172] (0xc00079f1e0) Reply frame received for 3 I0318 
10:47:27.165254 6 log.go:172] (0xc00079f1e0) (0xc0005140a0) Create stream I0318 10:47:27.165269 6 log.go:172] (0xc00079f1e0) (0xc0005140a0) Stream added, broadcasting: 5 I0318 10:47:27.166128 6 log.go:172] (0xc00079f1e0) Reply frame received for 5 I0318 10:47:27.250666 6 log.go:172] (0xc00079f1e0) Data frame received for 3 I0318 10:47:27.250708 6 log.go:172] (0xc000ee2140) (3) Data frame handling I0318 10:47:27.250744 6 log.go:172] (0xc000ee2140) (3) Data frame sent I0318 10:47:27.251178 6 log.go:172] (0xc00079f1e0) Data frame received for 5 I0318 10:47:27.251194 6 log.go:172] (0xc0005140a0) (5) Data frame handling I0318 10:47:27.251464 6 log.go:172] (0xc00079f1e0) Data frame received for 3 I0318 10:47:27.251485 6 log.go:172] (0xc000ee2140) (3) Data frame handling I0318 10:47:27.252649 6 log.go:172] (0xc00079f1e0) Data frame received for 1 I0318 10:47:27.252682 6 log.go:172] (0xc000355f40) (1) Data frame handling I0318 10:47:27.252703 6 log.go:172] (0xc000355f40) (1) Data frame sent I0318 10:47:27.252721 6 log.go:172] (0xc00079f1e0) (0xc000355f40) Stream removed, broadcasting: 1 I0318 10:47:27.252736 6 log.go:172] (0xc00079f1e0) Go away received I0318 10:47:27.252901 6 log.go:172] (0xc00079f1e0) (0xc000355f40) Stream removed, broadcasting: 1 I0318 10:47:27.252930 6 log.go:172] (0xc00079f1e0) (0xc000ee2140) Stream removed, broadcasting: 3 I0318 10:47:27.252942 6 log.go:172] (0xc00079f1e0) (0xc0005140a0) Stream removed, broadcasting: 5 Mar 18 10:47:27.252: INFO: Waiting for endpoints: map[] Mar 18 10:47:27.255: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.231:8080/dial?request=hostName&protocol=http&host=10.244.1.230&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-tnx8q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 10:47:27.255: INFO: >>> kubeConfig: /root/.kube/config I0318 10:47:27.283224 6 log.go:172] (0xc00079f6b0) (0xc0003bf5e0) Create stream I0318 10:47:27.283252 6 log.go:172] (0xc00079f6b0) (0xc0003bf5e0) Stream added, broadcasting: 1 I0318 10:47:27.285890 6 log.go:172] (0xc00079f6b0) Reply frame received for 1 I0318 10:47:27.285945 6 log.go:172] (0xc00079f6b0) (0xc000194780) Create stream I0318 10:47:27.285961 6 log.go:172] (0xc00079f6b0) (0xc000194780) Stream added, broadcasting: 3 I0318 10:47:27.287083 6 log.go:172] (0xc00079f6b0) Reply frame received for 3 I0318 10:47:27.287139 6 log.go:172] (0xc00079f6b0) (0xc0003bfb80) Create stream I0318 10:47:27.287156 6 log.go:172] (0xc00079f6b0) (0xc0003bfb80) Stream added, broadcasting: 5 I0318 10:47:27.288125 6 log.go:172] (0xc00079f6b0) Reply frame received for 5 I0318 10:47:27.350815 6 log.go:172] (0xc00079f6b0) Data frame received for 3 I0318 10:47:27.350839 6 log.go:172] (0xc000194780) (3) Data frame handling I0318 10:47:27.350853 6 log.go:172] (0xc000194780) (3) Data frame sent I0318 10:47:27.351431 6 log.go:172] (0xc00079f6b0) Data frame received for 3 I0318 10:47:27.351474 6 log.go:172] (0xc000194780) (3) Data frame handling I0318 10:47:27.351497 6 log.go:172] (0xc00079f6b0) Data frame received for 5 I0318 10:47:27.351507 6 log.go:172] (0xc0003bfb80) (5) Data frame handling I0318 10:47:27.353442 6 log.go:172] (0xc00079f6b0) Data frame received for 1 I0318 10:47:27.353464 6 log.go:172] (0xc0003bf5e0) (1) Data frame handling I0318 10:47:27.353492 6 log.go:172] (0xc0003bf5e0) (1) Data frame sent I0318 10:47:27.353511 6 log.go:172] (0xc00079f6b0) (0xc0003bf5e0) Stream removed, broadcasting: 1 I0318 10:47:27.353581 
6 log.go:172] (0xc00079f6b0) Go away received I0318 10:47:27.353634 6 log.go:172] (0xc00079f6b0) (0xc0003bf5e0) Stream removed, broadcasting: 1 I0318 10:47:27.353674 6 log.go:172] (0xc00079f6b0) (0xc000194780) Stream removed, broadcasting: 3 I0318 10:47:27.353697 6 log.go:172] (0xc00079f6b0) (0xc0003bfb80) Stream removed, broadcasting: 5 Mar 18 10:47:27.353: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:47:27.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-tnx8q" for this suite. Mar 18 10:47:49.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:47:49.419: INFO: namespace: e2e-tests-pod-network-test-tnx8q, resource: bindings, ignored listing per whitelist Mar 18 10:47:49.471: INFO: namespace e2e-tests-pod-network-test-tnx8q deletion completed in 22.11308795s • [SLOW TEST:44.528 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:47:49.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 10:47:49.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-p7896' Mar 18 10:47:51.410: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 10:47:51.410: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Mar 18 10:47:55.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-p7896' Mar 18 10:47:55.637: INFO: stderr: "" Mar 18 10:47:55.637: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:47:55.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p7896" for this suite. Mar 18 10:48:01.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:48:01.674: INFO: namespace: e2e-tests-kubectl-p7896, resource: bindings, ignored listing per whitelist Mar 18 10:48:01.746: INFO: namespace e2e-tests-kubectl-p7896 deletion completed in 6.104888903s • [SLOW TEST:12.275 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:48:01.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 18 10:48:01.869: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 10:48:01.877: INFO: Waiting for terminating namespaces to be deleted... 
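
For reference on the "Kubectl run deployment" spec above: its stderr notes that kubectl run with --generator=deployment/v1beta1 is deprecated. A non-deprecated way to create and clean up the same nginx deployment by hand would be the following sketch (only the image name is taken from this run; the namespace "demo" is hypothetical):

kubectl create namespace demo
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=demo
kubectl rollout status deployment/e2e-test-nginx-deployment --namespace=demo
kubectl delete deployment e2e-test-nginx-deployment --namespace=demo
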
Mar 18 10:48:01.879: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Mar 18 10:48:01.885: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Mar 18 10:48:01.885: INFO: Container coredns ready: true, restart count 0
Mar 18 10:48:01.885: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
Mar 18 10:48:01.885: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 10:48:01.885: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 18 10:48:01.885: INFO: Container kindnet-cni ready: true, restart count 0
Mar 18 10:48:01.885: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Mar 18 10:48:01.889: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 18 10:48:01.889: INFO: Container kindnet-cni ready: true, restart count 0
Mar 18 10:48:01.889: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Mar 18 10:48:01.889: INFO: Container coredns ready: true, restart count 0
Mar 18 10:48:01.889: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 18 10:48:01.889: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f2f66d70-6905-11ea-9856-0242ac11000f 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f2f66d70-6905-11ea-9856-0242ac11000f off the node hunter-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f2f66d70-6905-11ea-9856-0242ac11000f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 18 10:48:10.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-mp5gz" for this suite.
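
The NodeSelector check above can be reproduced by hand roughly as follows; only the node name is taken from this run, while the label key/value, pod name, and image are illustrative assumptions:

kubectl label node hunter-worker2 example.com/e2e-demo=42
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1
EOF
kubectl delete pod with-labels
kubectl label node hunter-worker2 example.com/e2e-demo-
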
Mar 18 10:48:24.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:48:24.212: INFO: namespace: e2e-tests-sched-pred-mp5gz, resource: bindings, ignored listing per whitelist Mar 18 10:48:24.221: INFO: namespace e2e-tests-sched-pred-mp5gz deletion completed in 14.092033316s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:22.475 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:48:24.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 10:48:24.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Mar 18 10:48:24.377: INFO: stderr: "" Mar 18 10:48:24.377: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Mar 18 10:48:24.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b45jh' Mar 18 10:48:24.652: INFO: stderr: "" Mar 18 10:48:24.652: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 18 10:48:24.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b45jh' Mar 18 10:48:24.905: INFO: stderr: "" Mar 18 10:48:24.905: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 18 10:48:25.909: INFO: Selector matched 1 pods for map[app:redis] Mar 18 10:48:25.909: INFO: Found 0 / 1 Mar 18 10:48:26.910: INFO: Selector matched 1 pods for map[app:redis] Mar 18 10:48:26.910: INFO: Found 0 / 1 Mar 18 10:48:27.910: INFO: Selector matched 1 pods for map[app:redis] Mar 18 10:48:27.910: INFO: Found 1 / 1 Mar 18 10:48:27.910: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 18 10:48:27.914: INFO: Selector matched 1 pods for map[app:redis] Mar 18 10:48:27.914: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
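
The manifest piped to "kubectl create -f -" above is not captured in the log; a sketch consistent with the "kubectl describe rc redis-master" output below would be roughly the following (the named container port is inferred from the service's TargetPort and is an assumption):

cat <<'EOF' | kubectl create -f - --namespace=e2e-tests-kubectl-b45jh
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server
          containerPort: 6379
EOF
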
Mar 18 10:48:27.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-mwcd4 --namespace=e2e-tests-kubectl-b45jh' Mar 18 10:48:28.028: INFO: stderr: "" Mar 18 10:48:28.028: INFO: stdout: "Name: redis-master-mwcd4\nNamespace: e2e-tests-kubectl-b45jh\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Wed, 18 Mar 2020 10:48:24 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.232\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://f2203f12f9c3c466fb9bf1df728b2488200bf6c7a4fae29263418da31699a3da\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 18 Mar 2020 10:48:27 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-rk98j (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-rk98j:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-rk98j\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-b45jh/redis-master-mwcd4 to hunter-worker\n Normal Pulled 3s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" Mar 18 10:48:28.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-b45jh' Mar 18 10:48:28.160: INFO: stderr: "" Mar 18 10:48:28.160: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-b45jh\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-mwcd4\n" Mar 18 10:48:28.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-b45jh' Mar 18 10:48:28.279: INFO: stderr: "" Mar 18 10:48:28.279: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-b45jh\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.104.51.196\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.232:6379\nSession Affinity: None\nEvents: \n" Mar 18 10:48:28.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Mar 18 10:48:28.435: INFO: stderr: "" Mar 18 10:48:28.435: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 18 Mar 2020 10:48:25 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 18 Mar 2020 10:48:25 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 18 Mar 2020 10:48:25 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 18 Mar 2020 10:48:25 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d16h\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d16h\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2d16h\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2d16h\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d16h\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d16h\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d16h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 18 10:48:28.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-b45jh' Mar 18 10:48:28.541: INFO: stderr: "" Mar 18 10:48:28.541: INFO: stdout: "Name: e2e-tests-kubectl-b45jh\nLabels: e2e-framework=kubectl\n e2e-run=c1d8ddb4-6905-11ea-9856-0242ac11000f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:48:28.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-b45jh" for this suite. 
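
Likewise, a Service manifest consistent with the "kubectl describe service redis-master" output above (ClusterIP, port 6379, named target port) would look roughly like this sketch; it is a reconstruction, not the manifest the test actually used:

cat <<'EOF' | kubectl create -f - --namespace=e2e-tests-kubectl-b45jh
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  type: ClusterIP
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server
EOF
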
Mar 18 10:48:50.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:48:50.584: INFO: namespace: e2e-tests-kubectl-b45jh, resource: bindings, ignored listing per whitelist Mar 18 10:48:50.639: INFO: namespace e2e-tests-kubectl-b45jh deletion completed in 22.094329042s • [SLOW TEST:26.418 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:48:50.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 10:48:50.756: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d9bfb86-6906-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-blrx2" to be "success or failure" Mar 18 10:48:50.789: INFO: Pod "downwardapi-volume-0d9bfb86-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.902391ms Mar 18 10:48:52.792: INFO: Pod "downwardapi-volume-0d9bfb86-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036074055s Mar 18 10:48:54.796: INFO: Pod "downwardapi-volume-0d9bfb86-6906-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040398305s STEP: Saw pod success Mar 18 10:48:54.796: INFO: Pod "downwardapi-volume-0d9bfb86-6906-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:48:54.799: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0d9bfb86-6906-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 10:48:54.847: INFO: Waiting for pod downwardapi-volume-0d9bfb86-6906-11ea-9856-0242ac11000f to disappear Mar 18 10:48:54.852: INFO: Pod downwardapi-volume-0d9bfb86-6906-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:48:54.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-blrx2" for this suite. 
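
The pod used by the projected downwardAPI spec above is generated by the test framework; a minimal hand-written equivalent that surfaces the container's CPU request through a projected volume might look like this (pod name, image, and request value are illustrative assumptions):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: "1m"
EOF
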
Mar 18 10:49:00.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 10:49:00.947: INFO: namespace: e2e-tests-projected-blrx2, resource: bindings, ignored listing per whitelist
Mar 18 10:49:00.962: INFO: namespace e2e-tests-projected-blrx2 deletion completed in 6.105305397s
• [SLOW TEST:10.323 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 18 10:49:00.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 18 10:49:01.067: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 18 10:49:02.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-bflzg" for this suite.
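
The CRD created and deleted by the spec above is built in Go against the apiextensions API; the same round trip can be sketched with kubectl on this v1beta1 API (the group and resource names are hypothetical):

cat <<'EOF' | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl delete customresourcedefinition foos.example.com
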
Mar 18 10:49:08.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 10:49:08.218: INFO: namespace: e2e-tests-custom-resource-definition-bflzg, resource: bindings, ignored listing per whitelist
Mar 18 10:49:08.221: INFO: namespace e2e-tests-custom-resource-definition-bflzg deletion completed in 6.093029433s
• [SLOW TEST:7.259 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 18 10:49:08.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-lcnt5
Mar 18 10:49:12.329: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-lcnt5
STEP: checking the pod's current state and verifying that restartCount is present
Mar 18 10:49:12.332: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 18 10:53:13.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lcnt5" for this suite.
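
The liveness-http pod above is created by the framework; a comparable hand-written pod that should also pass a /healthz HTTP liveness probe without being restarted could look like this sketch (pod name, image, and probe timings are assumptions):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.0
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
EOF
# after a few minutes the restart count should still be 0
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
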
Mar 18 10:53:19.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:53:19.031: INFO: namespace: e2e-tests-container-probe-lcnt5, resource: bindings, ignored listing per whitelist Mar 18 10:53:19.098: INFO: namespace e2e-tests-container-probe-lcnt5 deletion completed in 6.083410794s • [SLOW TEST:250.877 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:53:19.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wqcfj in namespace e2e-tests-proxy-jdxmt I0318 10:53:19.252760 6 runners.go:184] Created replication controller with name: proxy-service-wqcfj, namespace: e2e-tests-proxy-jdxmt, replica count: 1 I0318 10:53:20.303275 6 runners.go:184] proxy-service-wqcfj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 10:53:21.303522 6 runners.go:184] proxy-service-wqcfj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 10:53:22.303779 6 runners.go:184] proxy-service-wqcfj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 10:53:23.304026 6 runners.go:184] proxy-service-wqcfj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 10:53:24.304269 6 runners.go:184] proxy-service-wqcfj Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 18 10:53:24.307: INFO: setup took 5.120308079s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 18 10:53:24.314: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jdxmt/pods/proxy-service-wqcfj-nzzpz:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-b673459b-6906-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 10:53:34.041: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b673fd3d-6906-11ea-9856-0242ac11000f" in namespace 
"e2e-tests-projected-nr4h8" to be "success or failure" Mar 18 10:53:34.047: INFO: Pod "pod-projected-configmaps-b673fd3d-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268357ms Mar 18 10:53:36.062: INFO: Pod "pod-projected-configmaps-b673fd3d-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021462061s Mar 18 10:53:38.067: INFO: Pod "pod-projected-configmaps-b673fd3d-6906-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025840801s STEP: Saw pod success Mar 18 10:53:38.067: INFO: Pod "pod-projected-configmaps-b673fd3d-6906-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:53:38.070: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-b673fd3d-6906-11ea-9856-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 18 10:53:38.088: INFO: Waiting for pod pod-projected-configmaps-b673fd3d-6906-11ea-9856-0242ac11000f to disappear Mar 18 10:53:38.092: INFO: Pod pod-projected-configmaps-b673fd3d-6906-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:53:38.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nr4h8" for this suite. Mar 18 10:53:44.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:53:44.134: INFO: namespace: e2e-tests-projected-nr4h8, resource: bindings, ignored listing per whitelist Mar 18 10:53:44.191: INFO: namespace e2e-tests-projected-nr4h8 deletion completed in 6.097321377s • [SLOW TEST:10.294 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:53:44.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 10:53:44.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc9371d0-6906-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-dlvn5" to be "success or failure" Mar 18 10:53:44.290: INFO: Pod "downwardapi-volume-bc9371d0-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.362558ms Mar 18 10:53:46.294: INFO: Pod "downwardapi-volume-bc9371d0-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008349995s Mar 18 10:53:48.298: INFO: Pod "downwardapi-volume-bc9371d0-6906-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012126321s STEP: Saw pod success Mar 18 10:53:48.298: INFO: Pod "downwardapi-volume-bc9371d0-6906-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:53:48.301: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-bc9371d0-6906-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 10:53:48.331: INFO: Waiting for pod downwardapi-volume-bc9371d0-6906-11ea-9856-0242ac11000f to disappear Mar 18 10:53:48.340: INFO: Pod downwardapi-volume-bc9371d0-6906-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:53:48.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dlvn5" for this suite. Mar 18 10:53:54.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:53:54.365: INFO: namespace: e2e-tests-projected-dlvn5, resource: bindings, ignored listing per whitelist Mar 18 10:53:54.451: INFO: namespace e2e-tests-projected-dlvn5 deletion completed in 6.107632737s • [SLOW TEST:10.259 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:53:54.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 18 10:53:54.532: INFO: namespace e2e-tests-kubectl-q5t52 Mar 18 10:53:54.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q5t52' Mar 18 10:53:54.799: INFO: stderr: "" Mar 18 10:53:54.799: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 18 10:53:55.804: INFO: Selector matched 1 pods for map[app:redis] Mar 18 10:53:55.804: INFO: Found 0 / 1 Mar 18 10:53:56.817: INFO: Selector matched 1 pods for map[app:redis] Mar 18 10:53:56.817: INFO: Found 0 / 1 Mar 18 10:53:57.802: INFO: Selector matched 1 pods for map[app:redis] Mar 18 10:53:57.802: INFO: Found 1 / 1 Mar 18 10:53:57.802: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Mar 18 10:53:57.805: INFO: Selector matched 1 pods for map[app:redis] Mar 18 10:53:57.805: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 18 10:53:57.805: INFO: wait on redis-master startup in e2e-tests-kubectl-q5t52 Mar 18 10:53:57.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ff5nf redis-master --namespace=e2e-tests-kubectl-q5t52' Mar 18 10:53:57.928: INFO: stderr: "" Mar 18 10:53:57.928: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Mar 10:53:56.928 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Mar 10:53:56.928 # Server started, Redis version 3.2.12\n1:M 18 Mar 10:53:56.928 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Mar 10:53:56.928 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 18 10:53:57.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-q5t52' Mar 18 10:53:58.081: INFO: stderr: "" Mar 18 10:53:58.081: INFO: stdout: "service/rm2 exposed\n" Mar 18 10:53:58.099: INFO: Service rm2 in namespace e2e-tests-kubectl-q5t52 found. STEP: exposing service Mar 18 10:54:00.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-q5t52' Mar 18 10:54:00.284: INFO: stderr: "" Mar 18 10:54:00.284: INFO: stdout: "service/rm3 exposed\n" Mar 18 10:54:00.288: INFO: Service rm3 in namespace e2e-tests-kubectl-q5t52 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:54:02.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q5t52" for this suite. 
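
Both exposed services from the steps above can be checked by hand; the commands below are not run by the test, but should show rm2 and rm3 each resolving to the redis pod on port 6379:

kubectl get services rm2 rm3 --namespace=e2e-tests-kubectl-q5t52 -o wide
kubectl get endpoints rm2 rm3 --namespace=e2e-tests-kubectl-q5t52
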
Mar 18 10:54:24.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:54:24.367: INFO: namespace: e2e-tests-kubectl-q5t52, resource: bindings, ignored listing per whitelist Mar 18 10:54:24.413: INFO: namespace e2e-tests-kubectl-q5t52 deletion completed in 22.115129945s • [SLOW TEST:29.961 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:54:24.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 18 10:54:24.512: INFO: Waiting up to 5m0s for pod "downward-api-d48edc21-6906-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-sn7zw" to be "success or failure" Mar 18 10:54:24.532: INFO: Pod "downward-api-d48edc21-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.475102ms Mar 18 10:54:26.538: INFO: Pod "downward-api-d48edc21-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025598818s Mar 18 10:54:28.542: INFO: Pod "downward-api-d48edc21-6906-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02992291s STEP: Saw pod success Mar 18 10:54:28.542: INFO: Pod "downward-api-d48edc21-6906-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:54:28.546: INFO: Trying to get logs from node hunter-worker2 pod downward-api-d48edc21-6906-11ea-9856-0242ac11000f container dapi-container: STEP: delete the pod Mar 18 10:54:28.577: INFO: Waiting for pod downward-api-d48edc21-6906-11ea-9856-0242ac11000f to disappear Mar 18 10:54:28.587: INFO: Pod downward-api-d48edc21-6906-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:54:28.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sn7zw" for this suite. 
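
The dapi-container above reads its own resource settings through the downward API; a minimal hand-written pod exercising the same env-var path might look like this (pod name, image, and resource values are illustrative assumptions):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
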
Mar 18 10:54:34.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 10:54:34.647: INFO: namespace: e2e-tests-downward-api-sn7zw, resource: bindings, ignored listing per whitelist
Mar 18 10:54:34.679: INFO: namespace e2e-tests-downward-api-sn7zw deletion completed in 6.087717469s
• [SLOW TEST:10.266 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 18 10:54:34.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 18 10:54:34.770: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 18 10:54:38.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-hrs6n" for this suite.
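
The failing-init-container pod above is generated by the framework; a hand-written equivalent that should end up in phase Failed without ever starting its app container could be sketched as follows (pod name and image are assumptions):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox:1.29
    command: ["/bin/sh", "-c", "exit 1"]
  containers:
  - name: app
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo this should never run"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'
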
Mar 18 10:54:44.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:54:44.982: INFO: namespace: e2e-tests-init-container-hrs6n, resource: bindings, ignored listing per whitelist Mar 18 10:54:45.030: INFO: namespace e2e-tests-init-container-hrs6n deletion completed in 6.116632892s • [SLOW TEST:10.350 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:54:45.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 18 10:54:45.127: INFO: Waiting up to 5m0s for pod "downward-api-e0d72532-6906-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-cjc9m" to be "success or failure" Mar 18 10:54:45.130: INFO: Pod "downward-api-e0d72532-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.512047ms Mar 18 10:54:47.134: INFO: Pod "downward-api-e0d72532-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007747438s Mar 18 10:54:49.138: INFO: Pod "downward-api-e0d72532-6906-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011688992s STEP: Saw pod success Mar 18 10:54:49.138: INFO: Pod "downward-api-e0d72532-6906-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:54:49.141: INFO: Trying to get logs from node hunter-worker2 pod downward-api-e0d72532-6906-11ea-9856-0242ac11000f container dapi-container: STEP: delete the pod Mar 18 10:54:49.178: INFO: Waiting for pod downward-api-e0d72532-6906-11ea-9856-0242ac11000f to disappear Mar 18 10:54:49.195: INFO: Pod downward-api-e0d72532-6906-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:54:49.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cjc9m" for this suite. 
Mar 18 10:54:55.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:54:55.259: INFO: namespace: e2e-tests-downward-api-cjc9m, resource: bindings, ignored listing per whitelist Mar 18 10:54:55.287: INFO: namespace e2e-tests-downward-api-cjc9m deletion completed in 6.089236526s • [SLOW TEST:10.257 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:54:55.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-e6f655b9-6906-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 10:54:55.402: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6f7dcb5-6906-11ea-9856-0242ac11000f" in namespace "e2e-tests-configmap-48pt6" to be "success or failure" Mar 18 10:54:55.406: INFO: Pod "pod-configmaps-e6f7dcb5-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.877899ms Mar 18 10:54:57.408: INFO: Pod "pod-configmaps-e6f7dcb5-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006246904s Mar 18 10:54:59.420: INFO: Pod "pod-configmaps-e6f7dcb5-6906-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017785781s STEP: Saw pod success Mar 18 10:54:59.420: INFO: Pod "pod-configmaps-e6f7dcb5-6906-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:54:59.423: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e6f7dcb5-6906-11ea-9856-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 18 10:54:59.462: INFO: Waiting for pod pod-configmaps-e6f7dcb5-6906-11ea-9856-0242ac11000f to disappear Mar 18 10:54:59.474: INFO: Pod pod-configmaps-e6f7dcb5-6906-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:54:59.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-48pt6" for this suite. 
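The ConfigMap multi-volume test above mounts one ConfigMap through two separate volumes in the same pod and expects both mounts to expose the same keys. A minimal Go sketch under the same assumptions as the earlier examples; the ConfigMap name "shared-cm" and the mount paths are invented for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One ConfigMap, referenced by two volumes and mounted at two paths in the
	// same container; both mounts should expose the same keys as files.
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "shared-cm"},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-multi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "cm-vol-1", VolumeSource: cmSource()},
				{Name: "cm-vol-2", VolumeSource: cmSource()},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-1/* /etc/cm-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-vol-1", MountPath: "/etc/cm-1"},
					{Name: "cm-vol-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}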
Mar 18 10:55:05.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:55:05.540: INFO: namespace: e2e-tests-configmap-48pt6, resource: bindings, ignored listing per whitelist Mar 18 10:55:05.565: INFO: namespace e2e-tests-configmap-48pt6 deletion completed in 6.086763572s • [SLOW TEST:10.277 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:55:05.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-ed12e1b4-6906-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 10:55:05.656: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed146e0a-6906-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-mhp2b" to be "success or failure" Mar 18 10:55:05.673: INFO: Pod "pod-projected-configmaps-ed146e0a-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.761749ms Mar 18 10:55:07.678: INFO: Pod "pod-projected-configmaps-ed146e0a-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022194409s Mar 18 10:55:09.681: INFO: Pod "pod-projected-configmaps-ed146e0a-6906-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02576781s STEP: Saw pod success Mar 18 10:55:09.682: INFO: Pod "pod-projected-configmaps-ed146e0a-6906-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:55:09.684: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-ed146e0a-6906-11ea-9856-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 18 10:55:09.718: INFO: Waiting for pod pod-projected-configmaps-ed146e0a-6906-11ea-9856-0242ac11000f to disappear Mar 18 10:55:09.723: INFO: Pod pod-projected-configmaps-ed146e0a-6906-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:55:09.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mhp2b" for this suite. 
Mar 18 10:55:15.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:55:15.820: INFO: namespace: e2e-tests-projected-mhp2b, resource: bindings, ignored listing per whitelist Mar 18 10:55:15.822: INFO: namespace e2e-tests-projected-mhp2b deletion completed in 6.095077009s • [SLOW TEST:10.257 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:55:15.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 10:55:15.918: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3328d43-6906-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-8dsbk" to be "success or failure" Mar 18 10:55:15.934: INFO: Pod "downwardapi-volume-f3328d43-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.360288ms Mar 18 10:55:17.938: INFO: Pod "downwardapi-volume-f3328d43-6906-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020346797s Mar 18 10:55:19.943: INFO: Pod "downwardapi-volume-f3328d43-6906-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024685174s STEP: Saw pod success Mar 18 10:55:19.943: INFO: Pod "downwardapi-volume-f3328d43-6906-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:55:19.946: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f3328d43-6906-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 10:55:19.990: INFO: Waiting for pod downwardapi-volume-f3328d43-6906-11ea-9856-0242ac11000f to disappear Mar 18 10:55:20.040: INFO: Pod downwardapi-volume-f3328d43-6906-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:55:20.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8dsbk" for this suite. 
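The downward API volume test above surfaces a container's memory limit as a file rather than an env var. An illustrative Go sketch follows; note that resourceFieldRef inside a volume must name the container, because the volume is pod-scoped. Names, image, and the limit value are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A downwardAPI volume exposing the named container's memory limit as the
	// file /etc/podinfo/memory_limit.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // required for volume-level resourceFieldRef
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}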
Mar 18 10:55:26.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:55:26.129: INFO: namespace: e2e-tests-downward-api-8dsbk, resource: bindings, ignored listing per whitelist Mar 18 10:55:26.134: INFO: namespace e2e-tests-downward-api-8dsbk deletion completed in 6.090294827s • [SLOW TEST:10.311 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:55:26.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-f9622790-6906-11ea-9856-0242ac11000f STEP: Creating configMap with name cm-test-opt-upd-f96227ec-6906-11ea-9856-0242ac11000f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f9622790-6906-11ea-9856-0242ac11000f STEP: Updating configmap cm-test-opt-upd-f96227ec-6906-11ea-9856-0242ac11000f STEP: Creating configMap with name cm-test-opt-create-f9622814-6906-11ea-9856-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:56:38.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k25nr" for this suite. 
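The projected-ConfigMap test above marks its ConfigMap sources optional, then deletes one, updates another, and creates a third while the pod is running, waiting for the mounted files to catch up. A rough Go sketch of a pod with optional projected ConfigMap sources; the structure is simplified (one projected volume with three sources) and the shortened ConfigMap names and the command are illustrative, not the fixture itself:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Optional sources let the pod start even if a referenced ConfigMap is
	// missing, and the kubelet keeps the mounted files in sync as ConfigMaps
	// are created, updated, or deleted.
	optional := true
	projectCM := func(name string) corev1.VolumeProjection {
		return corev1.VolumeProjection{
			ConfigMap: &corev1.ConfigMapProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Optional:             &optional,
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-optional-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							projectCM("cm-opt-del"),    // deleted during the test
							projectCM("cm-opt-upd"),    // updated during the test
							projectCM("cm-opt-create"), // created only after the pod is running
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do ls -l /etc/projected; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-cm", MountPath: "/etc/projected"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}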
Mar 18 10:57:00.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:57:00.787: INFO: namespace: e2e-tests-projected-k25nr, resource: bindings, ignored listing per whitelist Mar 18 10:57:00.815: INFO: namespace e2e-tests-projected-k25nr deletion completed in 22.097143267s • [SLOW TEST:94.681 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:57:00.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 10:57:00.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31c752f6-6907-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-hhzfn" to be "success or failure" Mar 18 10:57:00.946: INFO: Pod "downwardapi-volume-31c752f6-6907-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.095515ms Mar 18 10:57:02.950: INFO: Pod "downwardapi-volume-31c752f6-6907-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023169374s Mar 18 10:57:04.955: INFO: Pod "downwardapi-volume-31c752f6-6907-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027626345s STEP: Saw pod success Mar 18 10:57:04.955: INFO: Pod "downwardapi-volume-31c752f6-6907-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:57:04.958: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-31c752f6-6907-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 10:57:04.974: INFO: Waiting for pod downwardapi-volume-31c752f6-6907-11ea-9856-0242ac11000f to disappear Mar 18 10:57:04.999: INFO: Pod downwardapi-volume-31c752f6-6907-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:57:04.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hhzfn" for this suite. 
Mar 18 10:57:11.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:57:11.039: INFO: namespace: e2e-tests-downward-api-hhzfn, resource: bindings, ignored listing per whitelist Mar 18 10:57:11.098: INFO: namespace e2e-tests-downward-api-hhzfn deletion completed in 6.095309773s • [SLOW TEST:10.283 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:57:11.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 18 10:57:11.212: INFO: Waiting up to 5m0s for pod "pod-37e95c27-6907-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-kn44n" to be "success or failure" Mar 18 10:57:11.223: INFO: Pod "pod-37e95c27-6907-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.82587ms Mar 18 10:57:13.227: INFO: Pod "pod-37e95c27-6907-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014704166s Mar 18 10:57:15.231: INFO: Pod "pod-37e95c27-6907-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018656428s STEP: Saw pod success Mar 18 10:57:15.231: INFO: Pod "pod-37e95c27-6907-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:57:15.234: INFO: Trying to get logs from node hunter-worker pod pod-37e95c27-6907-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 10:57:15.260: INFO: Waiting for pod pod-37e95c27-6907-11ea-9856-0242ac11000f to disappear Mar 18 10:57:15.265: INFO: Pod pod-37e95c27-6907-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:57:15.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kn44n" for this suite. 
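The emptyDir case above writes a 0666-mode file into an emptyDir on the default medium from a non-root container and verifies the resulting permissions. A hedged Go sketch; the UID, image, and shell command stand in for the conformance image's own flags:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An emptyDir on the default (disk-backed) medium, written by a non-root
	// container; the file's 0666 permissions can then be checked from its logs.
	nonRootUID := int64(1001) // illustrative non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox",
				Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}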
Mar 18 10:57:21.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:57:21.328: INFO: namespace: e2e-tests-emptydir-kn44n, resource: bindings, ignored listing per whitelist Mar 18 10:57:21.359: INFO: namespace e2e-tests-emptydir-kn44n deletion completed in 6.091835122s • [SLOW TEST:10.261 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:57:21.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 18 10:57:21.472: INFO: Waiting up to 5m0s for pod "pod-3e06cd2a-6907-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-zrt22" to be "success or failure" Mar 18 10:57:21.494: INFO: Pod "pod-3e06cd2a-6907-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.96823ms Mar 18 10:57:23.563: INFO: Pod "pod-3e06cd2a-6907-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091431413s Mar 18 10:57:25.567: INFO: Pod "pod-3e06cd2a-6907-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095769221s STEP: Saw pod success Mar 18 10:57:25.567: INFO: Pod "pod-3e06cd2a-6907-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 10:57:25.571: INFO: Trying to get logs from node hunter-worker pod pod-3e06cd2a-6907-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 10:57:25.636: INFO: Waiting for pod pod-3e06cd2a-6907-11ea-9856-0242ac11000f to disappear Mar 18 10:57:25.643: INFO: Pod pod-3e06cd2a-6907-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:57:25.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zrt22" for this suite. 
Mar 18 10:57:31.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:57:31.693: INFO: namespace: e2e-tests-emptydir-zrt22, resource: bindings, ignored listing per whitelist Mar 18 10:57:31.736: INFO: namespace e2e-tests-emptydir-zrt22 deletion completed in 6.088741611s • [SLOW TEST:10.376 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:57:31.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0318 10:58:02.380933 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 10:58:02.380: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:58:02.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5d9l7" for this suite. 
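The garbage-collector test above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and then confirms the ReplicaSet survives. Because the delete call's signature differs across client-go versions, the sketch below only shows the options object such a delete would carry (Go, assuming k8s.io/apimachinery is available):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// PropagationPolicy=Orphan tells the garbage collector to strip the owner
	// reference from dependents instead of deleting them, so the ReplicaSet
	// (and its pods) outlive the Deployment.
	orphan := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &orphan}

	out, _ := json.Marshal(opts)
	fmt.Println(string(out))
}

Marshalled, this prints {"propagationPolicy":"Orphan"}, the same policy named in the spec title.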
Mar 18 10:58:08.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:58:08.474: INFO: namespace: e2e-tests-gc-5d9l7, resource: bindings, ignored listing per whitelist Mar 18 10:58:08.476: INFO: namespace e2e-tests-gc-5d9l7 deletion completed in 6.091640106s • [SLOW TEST:36.740 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:58:08.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 18 10:58:08.551: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 10:58:08.576: INFO: Waiting for terminating namespaces to be deleted... Mar 18 10:58:08.578: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 18 10:58:08.584: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 18 10:58:08.584: INFO: Container coredns ready: true, restart count 0 Mar 18 10:58:08.584: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 18 10:58:08.584: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 10:58:08.584: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 18 10:58:08.584: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 10:58:08.584: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 18 10:58:08.614: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 18 10:58:08.614: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 10:58:08.614: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 18 10:58:08.614: INFO: Container coredns ready: true, restart count 0 Mar 18 10:58:08.614: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 18 10:58:08.614: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
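The scheduling step above creates a pod whose nodeSelector matches no node, which is what produces the FailedScheduling event recorded next. A minimal Go sketch of such a pod; the label key and image are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No node carries this label, so the scheduler leaves the pod Pending and
	// records an event like "0/3 nodes are available: 3 node(s) didn't match
	// node selector".
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod-example"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"example.com/nonexistent-label": "true", // illustrative key; matches nothing
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}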
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fd60cf9801fd38], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:58:09.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-xzjlf" for this suite. Mar 18 10:58:15.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:58:15.675: INFO: namespace: e2e-tests-sched-pred-xzjlf, resource: bindings, ignored listing per whitelist Mar 18 10:58:15.721: INFO: namespace e2e-tests-sched-pred-xzjlf deletion completed in 6.08435922s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.245 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:58:15.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-vt8b STEP: Creating a pod to test atomic-volume-subpath Mar 18 10:58:15.832: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vt8b" in namespace "e2e-tests-subpath-tc5fj" to be "success or failure" Mar 18 10:58:15.850: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.99475ms Mar 18 10:58:17.853: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02094362s Mar 18 10:58:19.857: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024684638s Mar 18 10:58:21.893: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. Elapsed: 6.060974284s Mar 18 10:58:23.898: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. Elapsed: 8.065340796s Mar 18 10:58:25.902: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. Elapsed: 10.069635249s Mar 18 10:58:27.906: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. Elapsed: 12.074143331s Mar 18 10:58:29.911: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.078510379s Mar 18 10:58:31.915: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. Elapsed: 16.082882919s Mar 18 10:58:33.920: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. Elapsed: 18.087266135s Mar 18 10:58:35.923: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. Elapsed: 20.09116799s Mar 18 10:58:37.928: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. Elapsed: 22.095697432s Mar 18 10:58:39.932: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Running", Reason="", readiness=false. Elapsed: 24.099531737s Mar 18 10:58:41.936: INFO: Pod "pod-subpath-test-secret-vt8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.103921678s STEP: Saw pod success Mar 18 10:58:41.936: INFO: Pod "pod-subpath-test-secret-vt8b" satisfied condition "success or failure" Mar 18 10:58:41.939: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-vt8b container test-container-subpath-secret-vt8b: STEP: delete the pod Mar 18 10:58:42.002: INFO: Waiting for pod pod-subpath-test-secret-vt8b to disappear Mar 18 10:58:42.005: INFO: Pod pod-subpath-test-secret-vt8b no longer exists STEP: Deleting pod pod-subpath-test-secret-vt8b Mar 18 10:58:42.005: INFO: Deleting pod "pod-subpath-test-secret-vt8b" in namespace "e2e-tests-subpath-tc5fj" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:58:42.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-tc5fj" for this suite. Mar 18 10:58:48.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:58:48.043: INFO: namespace: e2e-tests-subpath-tc5fj, resource: bindings, ignored listing per whitelist Mar 18 10:58:48.092: INFO: namespace e2e-tests-subpath-tc5fj deletion completed in 6.080884341s • [SLOW TEST:32.370 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:58:48.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 10:59:16.198: INFO: Container started at 2020-03-18 
10:58:50 +0000 UTC, pod became ready at 2020-03-18 10:59:14 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 10:59:16.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bnkvd" for this suite. Mar 18 10:59:38.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 10:59:38.232: INFO: namespace: e2e-tests-container-probe-bnkvd, resource: bindings, ignored listing per whitelist Mar 18 10:59:38.287: INFO: namespace e2e-tests-container-probe-bnkvd deletion completed in 22.084450824s • [SLOW TEST:50.195 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 10:59:38.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rf6wd Mar 18 10:59:42.425: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rf6wd STEP: checking the pod's current state and verifying that restartCount is present Mar 18 10:59:42.428: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:03:42.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rf6wd" for this suite. 
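The liveness test above runs a container that creates /tmp/health and leaves it in place, so the exec probe keeps passing and restartCount stays 0 for the observation window. A hedged Go sketch of a comparable pod; note that current k8s.io/api releases embed the probe fields as ProbeHandler, while the v1.13-era API this log comes from named the embedded field Handler. Names, image, and probe timings are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container touches /tmp/health and keeps it around, so the exec probe
	// ("cat /tmp/health") keeps succeeding and the container is never restarted.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /tmp/health && sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}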
Mar 18 11:03:49.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:03:49.073: INFO: namespace: e2e-tests-container-probe-rf6wd, resource: bindings, ignored listing per whitelist Mar 18 11:03:49.078: INFO: namespace e2e-tests-container-probe-rf6wd deletion completed in 6.086242491s • [SLOW TEST:250.791 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:03:49.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-252ed632-6908-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 11:03:49.281: INFO: Waiting up to 5m0s for pod "pod-configmaps-252f4f00-6908-11ea-9856-0242ac11000f" in namespace "e2e-tests-configmap-qbjww" to be "success or failure" Mar 18 11:03:49.285: INFO: Pod "pod-configmaps-252f4f00-6908-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025854ms Mar 18 11:03:51.289: INFO: Pod "pod-configmaps-252f4f00-6908-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008427629s Mar 18 11:03:53.292: INFO: Pod "pod-configmaps-252f4f00-6908-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011551329s STEP: Saw pod success Mar 18 11:03:53.292: INFO: Pod "pod-configmaps-252f4f00-6908-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:03:53.294: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-252f4f00-6908-11ea-9856-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 18 11:03:53.310: INFO: Waiting for pod pod-configmaps-252f4f00-6908-11ea-9856-0242ac11000f to disappear Mar 18 11:03:53.314: INFO: Pod pod-configmaps-252f4f00-6908-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:03:53.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qbjww" for this suite. 
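The defaultMode test above mounts a ConfigMap volume whose projected files should carry a specific mode. A minimal Go sketch using defaultMode 0400; the ConfigMap name, image, and mode value are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// defaultMode applies to every key projected from the ConfigMap; with 0400
	// the mounted files show up read-only for the owner inside the container.
	defaultMode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-defaultmode-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          &defaultMode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}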
Mar 18 11:03:59.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:03:59.398: INFO: namespace: e2e-tests-configmap-qbjww, resource: bindings, ignored listing per whitelist Mar 18 11:03:59.412: INFO: namespace e2e-tests-configmap-qbjww deletion completed in 6.094601813s • [SLOW TEST:10.334 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:03:59.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 18 11:03:59.510: INFO: Waiting up to 5m0s for pod "downward-api-2b47e082-6908-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-vg2mf" to be "success or failure" Mar 18 11:03:59.514: INFO: Pod "downward-api-2b47e082-6908-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.555182ms Mar 18 11:04:01.518: INFO: Pod "downward-api-2b47e082-6908-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007329144s Mar 18 11:04:03.522: INFO: Pod "downward-api-2b47e082-6908-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011495526s STEP: Saw pod success Mar 18 11:04:03.522: INFO: Pod "downward-api-2b47e082-6908-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:04:03.525: INFO: Trying to get logs from node hunter-worker2 pod downward-api-2b47e082-6908-11ea-9856-0242ac11000f container dapi-container: STEP: delete the pod Mar 18 11:04:03.581: INFO: Waiting for pod downward-api-2b47e082-6908-11ea-9856-0242ac11000f to disappear Mar 18 11:04:03.592: INFO: Pod downward-api-2b47e082-6908-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:04:03.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vg2mf" for this suite. 
Mar 18 11:04:09.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:04:09.676: INFO: namespace: e2e-tests-downward-api-vg2mf, resource: bindings, ignored listing per whitelist Mar 18 11:04:09.708: INFO: namespace e2e-tests-downward-api-vg2mf deletion completed in 6.112477641s • [SLOW TEST:10.296 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:04:09.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:04:09.874: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 18 11:04:14.878: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 18 11:04:14.878: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 18 11:04:16.882: INFO: Creating deployment "test-rollover-deployment" Mar 18 11:04:16.890: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 18 11:04:18.896: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 18 11:04:18.902: INFO: Ensure that both replica sets have 1 created replica Mar 18 11:04:18.908: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 18 11:04:18.915: INFO: Updating deployment test-rollover-deployment Mar 18 11:04:18.915: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 18 11:04:20.925: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 18 11:04:20.931: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 18 11:04:20.938: INFO: all replica sets need to contain the pod-template-hash label Mar 18 11:04:20.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126259, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63720126256, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 11:04:22.946: INFO: all replica sets need to contain the pod-template-hash label Mar 18 11:04:22.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126261, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126256, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 11:04:24.946: INFO: all replica sets need to contain the pod-template-hash label Mar 18 11:04:24.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126261, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126256, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 11:04:26.947: INFO: all replica sets need to contain the pod-template-hash label Mar 18 11:04:26.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126261, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126256, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 11:04:28.946: INFO: all replica sets need to contain the pod-template-hash label Mar 18 11:04:28.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126261, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126256, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 11:04:30.946: INFO: all replica sets need to contain the pod-template-hash label Mar 18 11:04:30.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126257, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126261, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720126256, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 11:04:32.946: INFO: Mar 18 11:04:32.946: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 18 11:04:32.955: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-vfpc6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vfpc6/deployments/test-rollover-deployment,UID:35a45795-6908-11ea-99e8-0242ac110002,ResourceVersion:485427,Generation:2,CreationTimestamp:2020-03-18 11:04:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-18 11:04:17 +0000 UTC 2020-03-18 11:04:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-18 11:04:31 +0000 UTC 2020-03-18 11:04:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 18 11:04:32.959: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-vfpc6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vfpc6/replicasets/test-rollover-deployment-5b8479fdb6,UID:36da6f67-6908-11ea-99e8-0242ac110002,ResourceVersion:485418,Generation:2,CreationTimestamp:2020-03-18 11:04:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 35a45795-6908-11ea-99e8-0242ac110002 0xc0020fd467 0xc0020fd468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 18 11:04:32.959: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 18 11:04:32.959: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-vfpc6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vfpc6/replicasets/test-rollover-controller,UID:316bcdcd-6908-11ea-99e8-0242ac110002,ResourceVersion:485426,Generation:2,CreationTimestamp:2020-03-18 11:04:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 35a45795-6908-11ea-99e8-0242ac110002 0xc0020fd2d7 0xc0020fd2d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 11:04:32.959: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-vfpc6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vfpc6/replicasets/test-rollover-deployment-58494b7559,UID:35a68849-6908-11ea-99e8-0242ac110002,ResourceVersion:485381,Generation:2,CreationTimestamp:2020-03-18 11:04:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 35a45795-6908-11ea-99e8-0242ac110002 0xc0020fd397 0xc0020fd398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 11:04:32.963: INFO: Pod "test-rollover-deployment-5b8479fdb6-btqnd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-btqnd,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-vfpc6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vfpc6/pods/test-rollover-deployment-5b8479fdb6-btqnd,UID:36efc77f-6908-11ea-99e8-0242ac110002,ResourceVersion:485396,Generation:0,CreationTimestamp:2020-03-18 11:04:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 36da6f67-6908-11ea-99e8-0242ac110002 0xc001981937 0xc001981938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-t9wv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9wv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-t9wv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019819b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019819d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:04:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:04:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:04:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-03-18 11:04:19 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.244,StartTime:2020-03-18 11:04:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-18 11:04:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://9ee81dab140e4943815707740928da6513b5ac360f8d277ddd56ae77f42fba22}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:04:32.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-vfpc6" for this suite. Mar 18 11:04:38.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:04:38.991: INFO: namespace: e2e-tests-deployment-vfpc6, resource: bindings, ignored listing per whitelist Mar 18 11:04:39.054: INFO: namespace e2e-tests-deployment-vfpc6 deletion completed in 6.088174888s • [SLOW TEST:29.346 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:04:39.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Mar 18 11:04:39.216: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-f4kzs" to be "success or failure" Mar 18 11:04:39.220: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103332ms Mar 18 11:04:41.242: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025690732s Mar 18 11:04:43.244: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028284406s STEP: Saw pod success Mar 18 11:04:43.244: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 18 11:04:43.246: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 18 11:04:43.345: INFO: Waiting for pod pod-host-path-test to disappear Mar 18 11:04:43.352: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:04:43.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-f4kzs" for this suite. Mar 18 11:04:49.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:04:49.428: INFO: namespace: e2e-tests-hostpath-f4kzs, resource: bindings, ignored listing per whitelist Mar 18 11:04:49.446: INFO: namespace e2e-tests-hostpath-f4kzs deletion completed in 6.092133793s • [SLOW TEST:10.392 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:04:49.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:04:53.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-8nr2w" for this suite. 
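Note: the EmptyDir wrapper check above creates a secret and a configmap and mounts both into one pod to confirm the wrapper volumes do not conflict. A minimal sketch of that shape follows; the names, image, keys, and mount paths are illustrative, not the generated objects the test actually used.

apiVersion: v1
kind: Secret
metadata:
  name: wrapper-test-secret        # illustrative; the test uses generated names
stringData:
  data-1: value-1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wrapper-test-configmap     # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-test-pod           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-test-secret
  - name: configmap-volume
    configMap:
      name: wrapper-test-configmap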
Mar 18 11:04:59.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:04:59.768: INFO: namespace: e2e-tests-emptydir-wrapper-8nr2w, resource: bindings, ignored listing per whitelist Mar 18 11:04:59.838: INFO: namespace e2e-tests-emptydir-wrapper-8nr2w deletion completed in 6.118169577s • [SLOW TEST:10.392 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:04:59.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 11:04:59.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mw6l2' Mar 18 11:05:02.080: INFO: stderr: "" Mar 18 11:05:02.080: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 18 11:05:07.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mw6l2 -o json' Mar 18 11:05:07.224: INFO: stderr: "" Mar 18 11:05:07.224: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-18T11:05:02Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-mw6l2\",\n \"resourceVersion\": \"485598\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-mw6l2/pods/e2e-test-nginx-pod\",\n \"uid\": \"5092bbf7-6908-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-hps9f\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": 
{},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-hps9f\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-hps9f\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T11:05:02Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T11:05:05Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T11:05:05Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T11:05:02Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f34440b8691e079562dc6aa1b56721c921d8b7eb43a33a4e1f976123bac6b8f1\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-18T11:05:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.185\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-18T11:05:02Z\"\n }\n}\n" STEP: replace the image in the pod Mar 18 11:05:07.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-mw6l2' Mar 18 11:05:07.470: INFO: stderr: "" Mar 18 11:05:07.470: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Mar 18 11:05:07.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mw6l2' Mar 18 11:05:21.742: INFO: stderr: "" Mar 18 11:05:21.742: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:05:21.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mw6l2" for this suite. 
Mar 18 11:05:27.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:05:27.792: INFO: namespace: e2e-tests-kubectl-mw6l2, resource: bindings, ignored listing per whitelist Mar 18 11:05:27.832: INFO: namespace e2e-tests-kubectl-mw6l2 deletion completed in 6.081305879s • [SLOW TEST:27.993 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:05:27.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 18 11:05:27.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:27.980: INFO: Number of nodes with available pods: 0 Mar 18 11:05:27.981: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:28.984: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:28.987: INFO: Number of nodes with available pods: 0 Mar 18 11:05:28.987: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:29.985: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:29.987: INFO: Number of nodes with available pods: 0 Mar 18 11:05:29.987: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:30.996: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:31.008: INFO: Number of nodes with available pods: 1 Mar 18 11:05:31.008: INFO: Node hunter-worker2 is running more than one daemon pod Mar 18 11:05:31.986: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:31.989: INFO: Number of nodes with available pods: 2 Mar 18 11:05:31.989: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is 
revived. Mar 18 11:05:32.006: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:32.009: INFO: Number of nodes with available pods: 1 Mar 18 11:05:32.009: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:33.013: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:33.016: INFO: Number of nodes with available pods: 1 Mar 18 11:05:33.016: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:34.055: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:34.058: INFO: Number of nodes with available pods: 1 Mar 18 11:05:34.058: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:35.014: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:35.017: INFO: Number of nodes with available pods: 1 Mar 18 11:05:35.017: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:36.014: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:36.018: INFO: Number of nodes with available pods: 1 Mar 18 11:05:36.018: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:37.014: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:37.017: INFO: Number of nodes with available pods: 1 Mar 18 11:05:37.017: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:38.014: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:38.018: INFO: Number of nodes with available pods: 1 Mar 18 11:05:38.018: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:39.014: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:39.018: INFO: Number of nodes with available pods: 1 Mar 18 11:05:39.018: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:40.014: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:40.016: INFO: Number of nodes with available pods: 1 Mar 18 11:05:40.016: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:41.014: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:41.017: INFO: Number of nodes with available pods: 1 Mar 18 11:05:41.017: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:42.013: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:42.016: INFO: Number of nodes with available pods: 1 Mar 18 11:05:42.016: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:43.014: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:43.017: INFO: Number of nodes with available pods: 1 Mar 18 11:05:43.017: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:05:44.033: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:05:44.035: INFO: Number of nodes with available pods: 2 Mar 18 11:05:44.035: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-pjwgc, will wait for the garbage collector to delete the pods Mar 18 11:05:44.111: INFO: Deleting DaemonSet.extensions daemon-set took: 6.186697ms Mar 18 11:05:44.211: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.26213ms Mar 18 11:05:48.132: INFO: Number of nodes with available pods: 0 Mar 18 11:05:48.132: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 11:05:48.136: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pjwgc/daemonsets","resourceVersion":"485759"},"items":null} Mar 18 11:05:48.139: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pjwgc/pods","resourceVersion":"485759"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:05:48.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-pjwgc" for this suite. 
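Note: the DaemonSet scenario above reduces to a plain DaemonSet with no toleration for the control-plane taint, which is why the log repeatedly skips hunter-control-plane and waits until every remaining node reports an available pod. A minimal sketch, with an illustrative image and labels rather than the test's generated spec:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # name echoes the log; everything else is illustrative
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # illustrative image
      # No toleration for node-role.kubernetes.io/master:NoSchedule, so the
      # tainted control-plane node is skipped, matching the log above.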
Mar 18 11:05:54.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:05:54.176: INFO: namespace: e2e-tests-daemonsets-pjwgc, resource: bindings, ignored listing per whitelist Mar 18 11:05:54.244: INFO: namespace e2e-tests-daemonsets-pjwgc deletion completed in 6.090422774s • [SLOW TEST:26.412 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:05:54.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-6fbae52f-6908-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:05:54.365: INFO: Waiting up to 5m0s for pod "pod-secrets-6fbe059b-6908-11ea-9856-0242ac11000f" in namespace "e2e-tests-secrets-hqbtc" to be "success or failure" Mar 18 11:05:54.390: INFO: Pod "pod-secrets-6fbe059b-6908-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.315517ms Mar 18 11:05:56.394: INFO: Pod "pod-secrets-6fbe059b-6908-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028472579s Mar 18 11:05:58.398: INFO: Pod "pod-secrets-6fbe059b-6908-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032591324s STEP: Saw pod success Mar 18 11:05:58.398: INFO: Pod "pod-secrets-6fbe059b-6908-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:05:58.401: INFO: Trying to get logs from node hunter-worker pod pod-secrets-6fbe059b-6908-11ea-9856-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 18 11:05:58.436: INFO: Waiting for pod pod-secrets-6fbe059b-6908-11ea-9856-0242ac11000f to disappear Mar 18 11:05:58.445: INFO: Pod pod-secrets-6fbe059b-6908-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:05:58.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hqbtc" for this suite. 
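Note: the secrets test above mounts one Secret into a single pod through two separate volumes and reads it back from both paths. A sketch of that layout (names, keys, and mount paths are illustrative, not the test's generated objects):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test                # illustrative; the test appends a unique suffix
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test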
Mar 18 11:06:04.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:06:04.496: INFO: namespace: e2e-tests-secrets-hqbtc, resource: bindings, ignored listing per whitelist Mar 18 11:06:04.535: INFO: namespace e2e-tests-secrets-hqbtc deletion completed in 6.086907302s • [SLOW TEST:10.292 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:06:04.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Mar 18 11:06:04.614: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Mar 18 11:06:04.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:04.940: INFO: stderr: "" Mar 18 11:06:04.940: INFO: stdout: "service/redis-slave created\n" Mar 18 11:06:04.940: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Mar 18 11:06:04.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:05.275: INFO: stderr: "" Mar 18 11:06:05.275: INFO: stdout: "service/redis-master created\n" Mar 18 11:06:05.275: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 18 11:06:05.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:05.557: INFO: stderr: "" Mar 18 11:06:05.557: INFO: stdout: "service/frontend created\n" Mar 18 11:06:05.557: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Mar 18 11:06:05.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:05.815: INFO: stderr: "" Mar 18 11:06:05.815: INFO: stdout: "deployment.extensions/frontend created\n" Mar 18 11:06:05.816: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 18 11:06:05.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:06.093: INFO: stderr: "" Mar 18 11:06:06.093: INFO: stdout: "deployment.extensions/redis-master created\n" Mar 18 11:06:06.093: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Mar 18 11:06:06.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:06.369: INFO: stderr: "" Mar 18 11:06:06.369: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Mar 18 11:06:06.370: INFO: Waiting for all frontend pods to be Running. Mar 18 11:06:16.420: INFO: Waiting for frontend to serve content. Mar 18 11:06:16.440: INFO: Trying to add a new entry to the guestbook. Mar 18 11:06:16.456: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 18 11:06:16.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:16.606: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 18 11:06:16.606: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 18 11:06:16.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:16.795: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 11:06:16.795: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 18 11:06:16.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:16.947: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 11:06:16.947: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 18 11:06:16.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:17.067: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 11:06:17.067: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 18 11:06:17.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:17.192: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 11:06:17.192: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 18 11:06:17.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cpj8b' Mar 18 11:06:17.506: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 11:06:17.506: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:06:17.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cpj8b" for this suite. 
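Note: the guestbook manifests echoed above target extensions/v1beta1 Deployments, which fits the v1.13 cluster under test but has since been removed from Kubernetes; on current clusters the same frontend Deployment would be written against apps/v1, whose only extra requirement is an explicit selector. A sketch mirroring the logged manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:                        # required by apps/v1; must match the template labels
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80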
Mar 18 11:07:03.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:07:03.674: INFO: namespace: e2e-tests-kubectl-cpj8b, resource: bindings, ignored listing per whitelist Mar 18 11:07:03.726: INFO: namespace e2e-tests-kubectl-cpj8b deletion completed in 46.119152702s • [SLOW TEST:59.190 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:07:03.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Mar 18 11:07:03.843: INFO: Waiting up to 5m0s for pod "client-containers-99265ea4-6908-11ea-9856-0242ac11000f" in namespace "e2e-tests-containers-gvh7c" to be "success or failure" Mar 18 11:07:03.847: INFO: Pod "client-containers-99265ea4-6908-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.823465ms Mar 18 11:07:05.864: INFO: Pod "client-containers-99265ea4-6908-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021015868s Mar 18 11:07:07.868: INFO: Pod "client-containers-99265ea4-6908-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025210526s STEP: Saw pod success Mar 18 11:07:07.868: INFO: Pod "client-containers-99265ea4-6908-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:07:07.871: INFO: Trying to get logs from node hunter-worker2 pod client-containers-99265ea4-6908-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:07:07.901: INFO: Waiting for pod client-containers-99265ea4-6908-11ea-9856-0242ac11000f to disappear Mar 18 11:07:07.913: INFO: Pod client-containers-99265ea4-6908-11ea-9856-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:07:07.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-gvh7c" for this suite. 
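Note: the Docker Containers case above comes down to setting command on the container, which overrides the image's ENTRYPOINT (args would override CMD instead). A minimal hedged sketch, not the test's actual pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]                   # replaces the image's ENTRYPOINT
    args: ["entrypoint", "overridden"]       # replaces the image's CMD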
Mar 18 11:07:13.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:07:13.963: INFO: namespace: e2e-tests-containers-gvh7c, resource: bindings, ignored listing per whitelist Mar 18 11:07:14.004: INFO: namespace e2e-tests-containers-gvh7c deletion completed in 6.088140178s • [SLOW TEST:10.278 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:07:14.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-5q8q8 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-5q8q8 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-5q8q8 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-5q8q8 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-5q8q8 Mar 18 11:07:18.177: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-5q8q8, name: ss-0, uid: a13f2e85-6908-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Mar 18 11:07:18.796: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-5q8q8, name: ss-0, uid: a13f2e85-6908-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Mar 18 11:07:18.820: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-5q8q8, name: ss-0, uid: a13f2e85-6908-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 18 11:07:18.833: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-5q8q8 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-5q8q8 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-5q8q8 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 18 11:07:22.925: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5q8q8 Mar 18 11:07:22.929: INFO: Scaling statefulset ss to 0 Mar 18 11:07:32.948: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 11:07:32.952: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:07:32.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-5q8q8" for this suite. Mar 18 11:07:38.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:07:39.009: INFO: namespace: e2e-tests-statefulset-5q8q8, resource: bindings, ignored listing per whitelist Mar 18 11:07:39.053: INFO: namespace e2e-tests-statefulset-5q8q8 deletion completed in 6.084575722s • [SLOW TEST:25.048 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:07:39.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:08:39.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-qrf62" for this suite. 
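Note: the probe test above runs a pod whose readiness probe always fails and then confirms, over the whole window, that the pod never reports Ready and its container is never restarted; a failing readiness probe only withholds Ready, it does not restart anything (restarts are the liveness probe's job). A hedged sketch of such a pod:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo        # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]    # always fails, so the pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
    # No liveness probe is set, so the container is never restarted.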
Mar 18 11:09:01.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:09:01.291: INFO: namespace: e2e-tests-container-probe-qrf62, resource: bindings, ignored listing per whitelist Mar 18 11:09:01.342: INFO: namespace e2e-tests-container-probe-qrf62 deletion completed in 22.169533492s • [SLOW TEST:82.289 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:09:01.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-df444f16-6908-11ea-9856-0242ac11000f STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-df444f16-6908-11ea-9856-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:10:15.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-78g8q" for this suite. 
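Note: the projected-ConfigMap test above creates a pod whose volume projects a ConfigMap, then updates the ConfigMap and waits for the kubelet to rewrite the projected file. A hedged sketch of the relevant wiring (names, keys, and paths are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-upd   # illustrative; the test appends a unique suffix
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo  # illustrative
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd
# After editing the ConfigMap (e.g. kubectl edit configmap projected-configmap-test-upd),
# the kubelet eventually refreshes the projected file; propagation is not instant,
# which is why the test waits to observe the update in the volume.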
Mar 18 11:10:37.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:10:37.862: INFO: namespace: e2e-tests-projected-78g8q, resource: bindings, ignored listing per whitelist Mar 18 11:10:37.929: INFO: namespace e2e-tests-projected-78g8q deletion completed in 22.099360389s • [SLOW TEST:96.586 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:10:37.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 18 11:10:38.045: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hxd2d,SelfLink:/api/v1/namespaces/e2e-tests-watch-hxd2d/configmaps/e2e-watch-test-resource-version,UID:18cf8b5c-6909-11ea-99e8-0242ac110002,ResourceVersion:486774,Generation:0,CreationTimestamp:2020-03-18 11:10:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 11:10:38.045: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hxd2d,SelfLink:/api/v1/namespaces/e2e-tests-watch-hxd2d/configmaps/e2e-watch-test-resource-version,UID:18cf8b5c-6909-11ea-99e8-0242ac110002,ResourceVersion:486775,Generation:0,CreationTimestamp:2020-03-18 11:10:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:10:38.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-hxd2d" for this suite. 
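The Watchers spec records the resourceVersion returned by the first update of the ConfigMap and then opens a watch starting from that version, which is why only the second MODIFIED event and the DELETED event appear in the log above. The same behaviour can be observed against the raw API, for example through kubectl proxy; the namespace and the RV placeholder below are stand-ins, not values from this run.

# Open a watch on a single ConfigMap starting from a known resourceVersion.
# Replace the namespace, name and RV placeholder with real values.
kubectl proxy --port=8001 &
curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&fieldSelector=metadata.name%3De2e-watch-test-resource-version&resourceVersion=RV"
# Only events with a resourceVersion newer than RV are delivered on this stream.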
Mar 18 11:10:44.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:10:44.183: INFO: namespace: e2e-tests-watch-hxd2d, resource: bindings, ignored listing per whitelist Mar 18 11:10:44.186: INFO: namespace e2e-tests-watch-hxd2d deletion completed in 6.102835549s • [SLOW TEST:6.257 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:10:44.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:10:44.280: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.849774ms) Mar 18 11:10:44.283: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.875637ms) Mar 18 11:10:44.285: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.759791ms) Mar 18 11:10:44.288: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.900557ms) Mar 18 11:10:44.292: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.321779ms) Mar 18 11:10:44.294: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.794534ms) Mar 18 11:10:44.298: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.518657ms) Mar 18 11:10:44.301: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.24902ms) Mar 18 11:10:44.304: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.109552ms) Mar 18 11:10:44.308: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.318085ms) Mar 18 11:10:44.311: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.07473ms) Mar 18 11:10:44.335: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 23.877994ms) Mar 18 11:10:44.338: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.92952ms) Mar 18 11:10:44.342: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.002153ms) Mar 18 11:10:44.344: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.564709ms) Mar 18 11:10:44.347: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.763186ms) Mar 18 11:10:44.350: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.383057ms) Mar 18 11:10:44.352: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.433632ms) Mar 18 11:10:44.355: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.454257ms) Mar 18 11:10:44.358: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.944552ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:10:44.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-s5mgv" for this suite. Mar 18 11:10:50.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:10:50.445: INFO: namespace: e2e-tests-proxy-s5mgv, resource: bindings, ignored listing per whitelist Mar 18 11:10:50.455: INFO: namespace e2e-tests-proxy-s5mgv deletion completed in 6.094609105s • [SLOW TEST:6.269 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:10:50.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 11:10:50.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hh27s' Mar 18 11:10:50.704: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 11:10:50.704: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 18 11:10:50.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-hh27s' Mar 18 11:10:50.830: INFO: stderr: "" Mar 18 11:10:50.830: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:10:50.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hh27s" for this suite. 
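Two of the specs above are easy to poke at by hand. The node-proxy spec fetches the kubelet's log directory listing twenty times through the API server's node proxy subresource, and the Kubectl run job spec creates a Job with the (now deprecated) job/v1 generator. The path and job name below are copied from the log; the rest is an illustrative sketch.

# The same kubelet log listing the proxy spec requests (path copied from the log above):
kubectl get --raw "/api/v1/nodes/hunter-worker:10250/proxy/logs/"

# Deprecated generator used by the run-job spec; the deprecation warning in the log is expected:
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl get jobs e2e-test-nginx-job
kubectl delete jobs e2e-test-nginx-job
# Newer kubectl drops the generator; `kubectl create job` is the replacement there.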
Mar 18 11:11:12.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:11:12.967: INFO: namespace: e2e-tests-kubectl-hh27s, resource: bindings, ignored listing per whitelist Mar 18 11:11:12.968: INFO: namespace e2e-tests-kubectl-hh27s deletion completed in 22.134203081s • [SLOW TEST:22.513 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:11:12.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:11:13.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 18 11:11:13.221: INFO: stderr: "" Mar 18 11:11:13.221: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:11:13.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qj2jt" for this suite. 
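The Kubectl version spec only asserts that both the client and the server version.Info blocks are printed; the escaped struct dump in the captured stdout above is exactly that. For reference:

# Prints Client Version and Server Version, matching the captured stdout:
kubectl version
# Client-only output, with no API server round trip:
kubectl version --client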
Mar 18 11:11:19.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:11:19.271: INFO: namespace: e2e-tests-kubectl-qj2jt, resource: bindings, ignored listing per whitelist Mar 18 11:11:19.324: INFO: namespace e2e-tests-kubectl-qj2jt deletion completed in 6.090277281s • [SLOW TEST:6.355 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:11:19.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 18 11:11:19.444: INFO: Waiting up to 5m0s for pod "pod-31812982-6909-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-kvgr5" to be "success or failure" Mar 18 11:11:19.450: INFO: Pod "pod-31812982-6909-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.780818ms Mar 18 11:11:21.479: INFO: Pod "pod-31812982-6909-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034719615s Mar 18 11:11:23.483: INFO: Pod "pod-31812982-6909-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039105645s STEP: Saw pod success Mar 18 11:11:23.484: INFO: Pod "pod-31812982-6909-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:11:23.487: INFO: Trying to get logs from node hunter-worker2 pod pod-31812982-6909-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:11:23.504: INFO: Waiting for pod pod-31812982-6909-11ea-9856-0242ac11000f to disappear Mar 18 11:11:23.532: INFO: Pod pod-31812982-6909-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:11:23.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kvgr5" for this suite. 
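The EmptyDir (root,0666,default) spec writes a file as root with mode 0666 onto an emptyDir backed by the default medium and verifies the content and permissions. The e2e suite uses its own mount-test image for this; the sketch below substitutes busybox and a hypothetical pod name.

# Illustrative stand-in for the (root,0666,default) emptyDir check.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo mount-tester > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium (node disk), not Memory
EOF
kubectl logs emptydir-0666-demo  # expect something like: -rw-rw-rw- ... root ... /test-volume/f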
Mar 18 11:11:29.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:11:29.575: INFO: namespace: e2e-tests-emptydir-kvgr5, resource: bindings, ignored listing per whitelist Mar 18 11:11:29.649: INFO: namespace e2e-tests-emptydir-kvgr5 deletion completed in 6.112775863s • [SLOW TEST:10.325 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:11:29.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 18 11:11:29.738: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 11:11:29.777: INFO: Waiting for terminating namespaces to be deleted... Mar 18 11:11:29.780: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 18 11:11:29.787: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 18 11:11:29.787: INFO: Container coredns ready: true, restart count 0 Mar 18 11:11:29.787: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 18 11:11:29.787: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 11:11:29.787: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 18 11:11:29.787: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 11:11:29.787: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 18 11:11:29.793: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 18 11:11:29.793: INFO: Container coredns ready: true, restart count 0 Mar 18 11:11:29.793: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 18 11:11:29.793: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 11:11:29.793: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 18 11:11:29.793: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Mar 18 11:11:29.873: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Mar 18 11:11:29.873: INFO: Pod 
coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Mar 18 11:11:29.873: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Mar 18 11:11:29.873: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Mar 18 11:11:29.873: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Mar 18 11:11:29.873: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-37b9513f-6909-11ea-9856-0242ac11000f.15fd618a2768d02a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-6wjpz/filler-pod-37b9513f-6909-11ea-9856-0242ac11000f to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-37b9513f-6909-11ea-9856-0242ac11000f.15fd618a6f08f1a3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-37b9513f-6909-11ea-9856-0242ac11000f.15fd618aaa8b5973], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-37b9513f-6909-11ea-9856-0242ac11000f.15fd618ac5718bd7], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-37bc8bff-6909-11ea-9856-0242ac11000f.15fd618a287c95d7], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-6wjpz/filler-pod-37bc8bff-6909-11ea-9856-0242ac11000f to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-37bc8bff-6909-11ea-9856-0242ac11000f.15fd618a998218e7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-37bc8bff-6909-11ea-9856-0242ac11000f.15fd618ac6e201ee], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-37bc8bff-6909-11ea-9856-0242ac11000f.15fd618ad5e88cfe], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fd618b17e33c73], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:11:35.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-6wjpz" for this suite. 
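The SchedulerPredicates spec first starts one filler pod per worker that consumes most of the node's allocatable CPU, then creates an additional pod whose request cannot fit anywhere; the expected outcome is exactly the FailedScheduling event quoted above (two nodes with insufficient CPU plus one node with an untolerated taint). The same event can be provoked with a single pod whose request is simply unsatisfiable; the name and the 100-CPU figure below are deliberate, illustrative exaggerations.

# A pod that no node can satisfy stays Pending with a FailedScheduling event.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "100"               # far more than any node's allocatable CPU
EOF
kubectl describe pod additional-pod-demo | grep -A3 Events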
Mar 18 11:11:41.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:11:41.143: INFO: namespace: e2e-tests-sched-pred-6wjpz, resource: bindings, ignored listing per whitelist Mar 18 11:11:41.218: INFO: namespace e2e-tests-sched-pred-6wjpz deletion completed in 6.155145157s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:11.569 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:11:41.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-j68rh A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-j68rh;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-j68rh A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-j68rh;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-j68rh.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-j68rh.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-j68rh.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-j68rh.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-j68rh.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-j68rh.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-j68rh.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-j68rh.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j68rh.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 24.219.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.219.24_udp@PTR;check="$$(dig +tcp +noall +answer +search 24.219.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.219.24_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-j68rh A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-j68rh;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-j68rh A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-j68rh;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-j68rh.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-j68rh.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-j68rh.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-j68rh.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-j68rh.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-j68rh.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j68rh.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 24.219.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.219.24_udp@PTR;check="$$(dig +tcp +noall +answer +search 24.219.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.219.24_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 11:11:47.420: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.440: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.461: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.463: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.465: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.468: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.470: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.472: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.475: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.478: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:47.495: INFO: Lookups using e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-j68rh jessie_tcp@dns-test-service.e2e-tests-dns-j68rh jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc] 
Mar 18 11:11:52.501: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.521: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.545: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.548: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.552: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.555: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.559: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.562: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.565: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.568: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:52.589: INFO: Lookups using e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-j68rh jessie_tcp@dns-test-service.e2e-tests-dns-j68rh jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc] Mar 18 11:11:57.500: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods 
dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.522: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.546: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.549: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.553: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.556: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.559: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.563: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.566: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.569: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:11:57.590: INFO: Lookups using e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-j68rh jessie_tcp@dns-test-service.e2e-tests-dns-j68rh jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc] Mar 18 11:12:02.500: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.521: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could 
not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.547: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.550: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.554: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.557: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.560: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.564: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.567: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.570: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:02.595: INFO: Lookups using e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-j68rh jessie_tcp@dns-test-service.e2e-tests-dns-j68rh jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc] Mar 18 11:12:07.500: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.521: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.546: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server 
could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.549: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.553: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.556: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.559: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.563: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.566: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.569: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:07.589: INFO: Lookups using e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-j68rh jessie_tcp@dns-test-service.e2e-tests-dns-j68rh jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc] Mar 18 11:12:12.500: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.521: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.547: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.550: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the 
server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.554: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.557: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.561: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.564: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.567: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.570: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc from pod e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f: the server could not find the requested resource (get pods dns-test-3e93b78d-6909-11ea-9856-0242ac11000f) Mar 18 11:12:12.590: INFO: Lookups using e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-j68rh jessie_tcp@dns-test-service.e2e-tests-dns-j68rh jessie_udp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@dns-test-service.e2e-tests-dns-j68rh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc] Mar 18 11:12:17.583: INFO: DNS probes using e2e-tests-dns-j68rh/dns-test-3e93b78d-6909-11ea-9856-0242ac11000f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:12:17.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-j68rh" for this suite. 
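The DNS spec runs two probe pods (the "wheezy" and "jessie" images) that loop over the dig commands shown above and write an OK marker file per record; the repeated "Unable to read ..." lines are the framework polling for those marker files before the lookups have succeeded, and the run ends with all probes passing. A single lookup can be reproduced by hand from any pod that has DNS tools; the dnsutils image tag below is an assumption, and the service and namespace names from this run were torn down with the test namespace, so substitute your own.

# One-off lookups equivalent to the scripted dig probes (image tag and names are assumptions):
kubectl run -it --rm dns-check --restart=Never \
  --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.1 -- \
  nslookup dns-test-service.e2e-tests-dns-j68rh.svc.cluster.local
# SRV lookup for the named port, matching the _http._tcp queries in the log:
kubectl run -it --rm dns-check-srv --restart=Never \
  --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.1 -- \
  dig +short _http._tcp.dns-test-service.e2e-tests-dns-j68rh.svc.cluster.local SRV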
Mar 18 11:12:24.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:12:24.152: INFO: namespace: e2e-tests-dns-j68rh, resource: bindings, ignored listing per whitelist Mar 18 11:12:24.195: INFO: namespace e2e-tests-dns-j68rh deletion completed in 6.328794817s • [SLOW TEST:42.976 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:12:24.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Mar 18 11:12:24.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 18 11:12:24.419: INFO: stderr: "" Mar 18 11:12:24.419: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:12:24.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7gbrl" for this suite. 
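The cluster-info spec only checks that the Kubernetes master (and KubeDNS) endpoints are listed; the \x1b[...m sequences in the captured stdout are ANSI colour codes, not corruption. For reference:

kubectl cluster-info
# Verbose state dump for debugging, as suggested by the command's own output
# (the output directory is illustrative):
kubectl cluster-info dump --output-directory=/tmp/cluster-state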
Mar 18 11:12:30.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:12:30.493: INFO: namespace: e2e-tests-kubectl-7gbrl, resource: bindings, ignored listing per whitelist Mar 18 11:12:30.512: INFO: namespace e2e-tests-kubectl-7gbrl deletion completed in 6.089870233s • [SLOW TEST:6.317 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:12:30.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:12:30.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5be9a374-6909-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-d5nkg" to be "success or failure" Mar 18 11:12:30.649: INFO: Pod "downwardapi-volume-5be9a374-6909-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.925083ms Mar 18 11:12:32.653: INFO: Pod "downwardapi-volume-5be9a374-6909-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025129445s Mar 18 11:12:34.657: INFO: Pod "downwardapi-volume-5be9a374-6909-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029408358s STEP: Saw pod success Mar 18 11:12:34.657: INFO: Pod "downwardapi-volume-5be9a374-6909-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:12:34.661: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5be9a374-6909-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:12:34.690: INFO: Waiting for pod downwardapi-volume-5be9a374-6909-11ea-9856-0242ac11000f to disappear Mar 18 11:12:34.725: INFO: Pod downwardapi-volume-5be9a374-6909-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:12:34.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d5nkg" for this suite. 
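The Projected downwardAPI spec exposes only metadata.name through a projected downwardAPI volume and checks that the mounted file contains the pod's name. A minimal sketch with illustrative names:

# Projected downwardAPI volume exposing only the pod name.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs downwardapi-podname-demo   # prints: downwardapi-podname-demo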
Mar 18 11:12:40.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:12:40.763: INFO: namespace: e2e-tests-projected-d5nkg, resource: bindings, ignored listing per whitelist Mar 18 11:12:40.812: INFO: namespace e2e-tests-projected-d5nkg deletion completed in 6.083542693s • [SLOW TEST:10.300 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:12:40.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Mar 18 11:12:40.913: INFO: Waiting up to 5m0s for pod "var-expansion-620c8df6-6909-11ea-9856-0242ac11000f" in namespace "e2e-tests-var-expansion-wsj9x" to be "success or failure" Mar 18 11:12:40.919: INFO: Pod "var-expansion-620c8df6-6909-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136161ms Mar 18 11:12:42.923: INFO: Pod "var-expansion-620c8df6-6909-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010448493s Mar 18 11:12:44.928: INFO: Pod "var-expansion-620c8df6-6909-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014513482s STEP: Saw pod success Mar 18 11:12:44.928: INFO: Pod "var-expansion-620c8df6-6909-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:12:44.931: INFO: Trying to get logs from node hunter-worker pod var-expansion-620c8df6-6909-11ea-9856-0242ac11000f container dapi-container: STEP: delete the pod Mar 18 11:12:44.951: INFO: Waiting for pod var-expansion-620c8df6-6909-11ea-9856-0242ac11000f to disappear Mar 18 11:12:44.956: INFO: Pod var-expansion-620c8df6-6909-11ea-9856-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:12:44.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-wsj9x" for this suite. 
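The Variable Expansion spec defines environment variables that reference previously defined ones with the $(VAR) syntax and verifies the composed values inside the container; the expansion is done by the kubelet, not by the container's shell. Illustrative names and values:

# Env var composition with $(VAR) references.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep FOO_VAR_"]
    env:
    - name: FOO_VAR_A
      value: "value-a"
    - name: FOO_VAR_B
      value: "b-plus-$(FOO_VAR_A)"   # expanded by the kubelet to b-plus-value-a
EOF
kubectl logs var-expansion-demo       # expect FOO_VAR_B=b-plus-value-a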
Mar 18 11:12:51.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:12:51.076: INFO: namespace: e2e-tests-var-expansion-wsj9x, resource: bindings, ignored listing per whitelist Mar 18 11:12:51.086: INFO: namespace e2e-tests-var-expansion-wsj9x deletion completed in 6.109072497s • [SLOW TEST:10.274 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:12:51.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Mar 18 11:12:51.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-cx9gk run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 18 11:12:54.280: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0318 11:12:54.200866 821 log.go:172] (0xc000684160) (0xc0003f0d20) Create stream\nI0318 11:12:54.200926 821 log.go:172] (0xc000684160) (0xc0003f0d20) Stream added, broadcasting: 1\nI0318 11:12:54.204291 821 log.go:172] (0xc000684160) Reply frame received for 1\nI0318 11:12:54.204344 821 log.go:172] (0xc000684160) (0xc000884f00) Create stream\nI0318 11:12:54.204360 821 log.go:172] (0xc000684160) (0xc000884f00) Stream added, broadcasting: 3\nI0318 11:12:54.205515 821 log.go:172] (0xc000684160) Reply frame received for 3\nI0318 11:12:54.205567 821 log.go:172] (0xc000684160) (0xc0003ba500) Create stream\nI0318 11:12:54.205583 821 log.go:172] (0xc000684160) (0xc0003ba500) Stream added, broadcasting: 5\nI0318 11:12:54.206689 821 log.go:172] (0xc000684160) Reply frame received for 5\nI0318 11:12:54.206742 821 log.go:172] (0xc000684160) (0xc000884fa0) Create stream\nI0318 11:12:54.206775 821 log.go:172] (0xc000684160) (0xc000884fa0) Stream added, broadcasting: 7\nI0318 11:12:54.207848 821 log.go:172] (0xc000684160) Reply frame received for 7\nI0318 11:12:54.208012 821 log.go:172] (0xc000884f00) (3) Writing data frame\nI0318 11:12:54.208154 821 log.go:172] (0xc000884f00) (3) Writing data frame\nI0318 11:12:54.209082 821 log.go:172] (0xc000684160) Data frame received for 5\nI0318 11:12:54.209264 821 log.go:172] (0xc0003ba500) (5) Data frame handling\nI0318 11:12:54.209301 821 log.go:172] (0xc0003ba500) (5) Data frame sent\nI0318 11:12:54.209946 821 log.go:172] (0xc000684160) Data frame received for 5\nI0318 11:12:54.209969 821 log.go:172] (0xc0003ba500) (5) Data frame handling\nI0318 11:12:54.209997 821 log.go:172] (0xc0003ba500) (5) Data frame sent\nI0318 11:12:54.255030 821 log.go:172] (0xc000684160) Data frame received for 5\nI0318 11:12:54.255260 821 log.go:172] (0xc0003ba500) (5) Data frame handling\nI0318 11:12:54.255327 821 log.go:172] (0xc000684160) Data frame received for 7\nI0318 11:12:54.255352 821 log.go:172] (0xc000884fa0) (7) Data frame handling\nI0318 11:12:54.255756 821 log.go:172] (0xc000684160) Data frame received for 1\nI0318 11:12:54.255802 821 log.go:172] (0xc0003f0d20) (1) Data frame handling\nI0318 11:12:54.255836 821 log.go:172] (0xc000684160) (0xc000884f00) Stream removed, broadcasting: 3\nI0318 11:12:54.255868 821 log.go:172] (0xc0003f0d20) (1) Data frame sent\nI0318 11:12:54.255943 821 log.go:172] (0xc000684160) (0xc0003f0d20) Stream removed, broadcasting: 1\nI0318 11:12:54.255998 821 log.go:172] (0xc000684160) Go away received\nI0318 11:12:54.256149 821 log.go:172] (0xc000684160) (0xc0003f0d20) Stream removed, broadcasting: 1\nI0318 11:12:54.256175 821 log.go:172] (0xc000684160) (0xc000884f00) Stream removed, broadcasting: 3\nI0318 11:12:54.256191 821 log.go:172] (0xc000684160) (0xc0003ba500) Stream removed, broadcasting: 5\nI0318 11:12:54.256223 821 log.go:172] (0xc000684160) (0xc000884fa0) Stream removed, broadcasting: 7\n" Mar 18 11:12:54.280: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:12:56.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cx9gk" for this suite. 
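The command the test runs is quoted in the log above; reconstructed as a standalone invocation it looks roughly like the sketch below (same job name, image, and flags as logged; exact flag support varies by kubectl version, and the stderr above already notes that --generator=job/v1 is deprecated in favor of --generator=run-pod/v1 or kubectl create).

# Pipe some input, let the job echo it back, and let --rm delete the Job when
# the attached command exits.
echo "abcd1234" | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin \
  -- sh -c 'cat && echo "stdin closed"'

# This is what the "verifying the job e2e-test-rm-busybox-job was deleted"
# step checks afterwards:
kubectl get job e2e-test-rm-busybox-job    # expected to fail with NotFound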
Mar 18 11:13:02.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:13:02.369: INFO: namespace: e2e-tests-kubectl-cx9gk, resource: bindings, ignored listing per whitelist Mar 18 11:13:02.387: INFO: namespace e2e-tests-kubectl-cx9gk deletion completed in 6.09721977s • [SLOW TEST:11.300 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:13:02.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 18 11:13:02.554: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:13:02.556: INFO: Number of nodes with available pods: 0 Mar 18 11:13:02.557: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:13:03.637: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:13:03.640: INFO: Number of nodes with available pods: 0 Mar 18 11:13:03.640: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:13:04.561: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:13:04.564: INFO: Number of nodes with available pods: 0 Mar 18 11:13:04.564: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:13:05.560: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:13:05.563: INFO: Number of nodes with available pods: 0 Mar 18 11:13:05.563: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:13:06.561: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:13:06.564: INFO: Number of nodes with available pods: 2 Mar 18 11:13:06.564: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 18 11:13:06.595: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 11:13:06.624: INFO: Number of nodes with available pods: 2 Mar 18 11:13:06.624: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bbq2s, will wait for the garbage collector to delete the pods Mar 18 11:13:07.699: INFO: Deleting DaemonSet.extensions daemon-set took: 7.178661ms Mar 18 11:13:07.800: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.436379ms Mar 18 11:13:21.803: INFO: Number of nodes with available pods: 0 Mar 18 11:13:21.803: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 11:13:21.806: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bbq2s/daemonsets","resourceVersion":"487404"},"items":null} Mar 18 11:13:21.808: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bbq2s/pods","resourceVersion":"487404"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:13:21.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-bbq2s" for this suite. 
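The DaemonSet in this test is created in Go by the suite; a rough shell equivalent of a "simple DaemonSet" like the logged daemon-set is sketched below (the labels and image are assumptions, not taken from the test). Because it carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, its pods land only on the worker nodes, which is why the log skips hunter-control-plane; the test then forces a daemon pod into the Failed phase and asserts the controller recreates it.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                    # name as in the log; everything else is a stand-in
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

# Expect one pod per schedulable (untainted) node:
kubectl get pods -l app=daemon-set -o wide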
Mar 18 11:13:27.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:13:27.888: INFO: namespace: e2e-tests-daemonsets-bbq2s, resource: bindings, ignored listing per whitelist Mar 18 11:13:27.914: INFO: namespace e2e-tests-daemonsets-bbq2s deletion completed in 6.092640348s • [SLOW TEST:25.527 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:13:27.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 18 11:13:36.070: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:36.090: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:38.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:38.094: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:40.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:40.094: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:42.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:42.095: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:44.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:44.094: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:46.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:46.094: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:48.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:48.094: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:50.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:50.093: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:52.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:52.094: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:54.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:54.094: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 11:13:56.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 11:13:56.094: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] 
Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:13:56.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-md6kd" for this suite. Mar 18 11:14:18.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:14:18.184: INFO: namespace: e2e-tests-container-lifecycle-hook-md6kd, resource: bindings, ignored listing per whitelist Mar 18 11:14:18.225: INFO: namespace e2e-tests-container-lifecycle-hook-md6kd deletion completed in 22.119661268s • [SLOW TEST:50.311 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:14:18.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Mar 18 11:14:18.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:18.618: INFO: stderr: "" Mar 18 11:14:18.618: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 11:14:18.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:18.737: INFO: stderr: "" Mar 18 11:14:18.737: INFO: stdout: "update-demo-nautilus-2rqlk update-demo-nautilus-smkdt " Mar 18 11:14:18.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rqlk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:18.832: INFO: stderr: "" Mar 18 11:14:18.832: INFO: stdout: "" Mar 18 11:14:18.832: INFO: update-demo-nautilus-2rqlk is created but not running Mar 18 11:14:23.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:23.943: INFO: stderr: "" Mar 18 11:14:23.943: INFO: stdout: "update-demo-nautilus-2rqlk update-demo-nautilus-smkdt " Mar 18 11:14:23.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rqlk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:24.045: INFO: stderr: "" Mar 18 11:14:24.045: INFO: stdout: "true" Mar 18 11:14:24.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rqlk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:24.145: INFO: stderr: "" Mar 18 11:14:24.145: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 11:14:24.145: INFO: validating pod update-demo-nautilus-2rqlk Mar 18 11:14:24.150: INFO: got data: { "image": "nautilus.jpg" } Mar 18 11:14:24.150: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 11:14:24.150: INFO: update-demo-nautilus-2rqlk is verified up and running Mar 18 11:14:24.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smkdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:24.249: INFO: stderr: "" Mar 18 11:14:24.249: INFO: stdout: "true" Mar 18 11:14:24.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smkdt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:24.353: INFO: stderr: "" Mar 18 11:14:24.353: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 11:14:24.353: INFO: validating pod update-demo-nautilus-smkdt Mar 18 11:14:24.357: INFO: got data: { "image": "nautilus.jpg" } Mar 18 11:14:24.357: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 18 11:14:24.357: INFO: update-demo-nautilus-smkdt is verified up and running STEP: rolling-update to new replication controller Mar 18 11:14:24.359: INFO: scanned /root for discovery docs: Mar 18 11:14:24.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:46.843: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 18 11:14:46.843: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 11:14:46.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:46.941: INFO: stderr: "" Mar 18 11:14:46.941: INFO: stdout: "update-demo-kitten-5qlgj update-demo-kitten-zhnr7 " Mar 18 11:14:46.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5qlgj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:47.042: INFO: stderr: "" Mar 18 11:14:47.042: INFO: stdout: "true" Mar 18 11:14:47.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5qlgj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:47.144: INFO: stderr: "" Mar 18 11:14:47.144: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 18 11:14:47.144: INFO: validating pod update-demo-kitten-5qlgj Mar 18 11:14:47.148: INFO: got data: { "image": "kitten.jpg" } Mar 18 11:14:47.148: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 18 11:14:47.148: INFO: update-demo-kitten-5qlgj is verified up and running Mar 18 11:14:47.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zhnr7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:47.243: INFO: stderr: "" Mar 18 11:14:47.243: INFO: stdout: "true" Mar 18 11:14:47.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zhnr7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4kdf' Mar 18 11:14:47.343: INFO: stderr: "" Mar 18 11:14:47.343: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 18 11:14:47.343: INFO: validating pod update-demo-kitten-zhnr7 Mar 18 11:14:47.347: INFO: got data: { "image": "kitten.jpg" } Mar 18 11:14:47.347: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 18 11:14:47.347: INFO: update-demo-kitten-zhnr7 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:14:47.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-v4kdf" for this suite. Mar 18 11:15:09.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:15:09.405: INFO: namespace: e2e-tests-kubectl-v4kdf, resource: bindings, ignored listing per whitelist Mar 18 11:15:09.443: INFO: namespace e2e-tests-kubectl-v4kdf deletion completed in 22.092599713s • [SLOW TEST:51.218 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:15:09.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-baa8a5f6-6909-11ea-9856-0242ac11000f STEP: Creating configMap with name cm-test-opt-upd-baa8a64d-6909-11ea-9856-0242ac11000f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-baa8a5f6-6909-11ea-9856-0242ac11000f STEP: Updating configmap cm-test-opt-upd-baa8a64d-6909-11ea-9856-0242ac11000f STEP: Creating configMap with name cm-test-opt-create-baa8a671-6909-11ea-9856-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:16:34.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9wxsm" for this suite. 
Mar 18 11:16:56.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:16:56.110: INFO: namespace: e2e-tests-configmap-9wxsm, resource: bindings, ignored listing per whitelist Mar 18 11:16:56.174: INFO: namespace e2e-tests-configmap-9wxsm deletion completed in 22.102436747s • [SLOW TEST:106.731 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:16:56.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Mar 18 11:16:56.273: INFO: Waiting up to 5m0s for pod "var-expansion-fa44e6ef-6909-11ea-9856-0242ac11000f" in namespace "e2e-tests-var-expansion-7f295" to be "success or failure" Mar 18 11:16:56.290: INFO: Pod "var-expansion-fa44e6ef-6909-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.743194ms Mar 18 11:16:58.294: INFO: Pod "var-expansion-fa44e6ef-6909-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020784939s Mar 18 11:17:00.298: INFO: Pod "var-expansion-fa44e6ef-6909-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025328451s STEP: Saw pod success Mar 18 11:17:00.298: INFO: Pod "var-expansion-fa44e6ef-6909-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:17:00.301: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-fa44e6ef-6909-11ea-9856-0242ac11000f container dapi-container: STEP: delete the pod Mar 18 11:17:00.324: INFO: Waiting for pod var-expansion-fa44e6ef-6909-11ea-9856-0242ac11000f to disappear Mar 18 11:17:00.352: INFO: Pod var-expansion-fa44e6ef-6909-11ea-9856-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:17:00.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-7f295" for this suite. 
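This Variable Expansion case differs from the earlier one in that the $(VAR) reference sits in the container's command rather than in another env entry. A minimal sketch with an assumed variable name (MESSAGE):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-command-demo    # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: MESSAGE
      value: "test-value"
    # $(MESSAGE) is substituted by Kubernetes before the shell runs,
    # so the container effectively executes: sh -c "echo test-value"
    command: ["sh", "-c", "echo $(MESSAGE)"]
EOF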
Mar 18 11:17:06.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:17:06.412: INFO: namespace: e2e-tests-var-expansion-7f295, resource: bindings, ignored listing per whitelist Mar 18 11:17:06.460: INFO: namespace e2e-tests-var-expansion-7f295 deletion completed in 6.104442454s • [SLOW TEST:10.285 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:17:06.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-ttktz STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 11:17:06.533: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 11:17:32.647: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.201:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-ttktz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 11:17:32.647: INFO: >>> kubeConfig: /root/.kube/config I0318 11:17:32.683472 6 log.go:172] (0xc0022682c0) (0xc001f74320) Create stream I0318 11:17:32.683502 6 log.go:172] (0xc0022682c0) (0xc001f74320) Stream added, broadcasting: 1 I0318 11:17:32.686104 6 log.go:172] (0xc0022682c0) Reply frame received for 1 I0318 11:17:32.686172 6 log.go:172] (0xc0022682c0) (0xc000354780) Create stream I0318 11:17:32.686196 6 log.go:172] (0xc0022682c0) (0xc000354780) Stream added, broadcasting: 3 I0318 11:17:32.687017 6 log.go:172] (0xc0022682c0) Reply frame received for 3 I0318 11:17:32.687053 6 log.go:172] (0xc0022682c0) (0xc0003548c0) Create stream I0318 11:17:32.687064 6 log.go:172] (0xc0022682c0) (0xc0003548c0) Stream added, broadcasting: 5 I0318 11:17:32.687726 6 log.go:172] (0xc0022682c0) Reply frame received for 5 I0318 11:17:32.765586 6 log.go:172] (0xc0022682c0) Data frame received for 5 I0318 11:17:32.765624 6 log.go:172] (0xc0003548c0) (5) Data frame handling I0318 11:17:32.765650 6 log.go:172] (0xc0022682c0) Data frame received for 3 I0318 11:17:32.765662 6 log.go:172] (0xc000354780) (3) Data frame handling I0318 11:17:32.765678 6 log.go:172] (0xc000354780) (3) Data frame sent I0318 11:17:32.765690 6 log.go:172] (0xc0022682c0) Data frame received for 3 I0318 11:17:32.765701 6 log.go:172] (0xc000354780) (3) Data frame handling I0318 11:17:32.767703 6 log.go:172] (0xc0022682c0) Data frame 
received for 1 I0318 11:17:32.767740 6 log.go:172] (0xc001f74320) (1) Data frame handling I0318 11:17:32.767786 6 log.go:172] (0xc001f74320) (1) Data frame sent I0318 11:17:32.767820 6 log.go:172] (0xc0022682c0) (0xc001f74320) Stream removed, broadcasting: 1 I0318 11:17:32.767930 6 log.go:172] (0xc0022682c0) Go away received I0318 11:17:32.767973 6 log.go:172] (0xc0022682c0) (0xc001f74320) Stream removed, broadcasting: 1 I0318 11:17:32.768009 6 log.go:172] (0xc0022682c0) (0xc000354780) Stream removed, broadcasting: 3 I0318 11:17:32.768037 6 log.go:172] (0xc0022682c0) (0xc0003548c0) Stream removed, broadcasting: 5 Mar 18 11:17:32.768: INFO: Found all expected endpoints: [netserver-0] Mar 18 11:17:32.771: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.13:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-ttktz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 11:17:32.771: INFO: >>> kubeConfig: /root/.kube/config I0318 11:17:32.804990 6 log.go:172] (0xc0017d84d0) (0xc0003be000) Create stream I0318 11:17:32.805018 6 log.go:172] (0xc0017d84d0) (0xc0003be000) Stream added, broadcasting: 1 I0318 11:17:32.807366 6 log.go:172] (0xc0017d84d0) Reply frame received for 1 I0318 11:17:32.807438 6 log.go:172] (0xc0017d84d0) (0xc0003be6e0) Create stream I0318 11:17:32.807472 6 log.go:172] (0xc0017d84d0) (0xc0003be6e0) Stream added, broadcasting: 3 I0318 11:17:32.808594 6 log.go:172] (0xc0017d84d0) Reply frame received for 3 I0318 11:17:32.808629 6 log.go:172] (0xc0017d84d0) (0xc001f743c0) Create stream I0318 11:17:32.808640 6 log.go:172] (0xc0017d84d0) (0xc001f743c0) Stream added, broadcasting: 5 I0318 11:17:32.809705 6 log.go:172] (0xc0017d84d0) Reply frame received for 5 I0318 11:17:32.883348 6 log.go:172] (0xc0017d84d0) Data frame received for 5 I0318 11:17:32.883389 6 log.go:172] (0xc001f743c0) (5) Data frame handling I0318 11:17:32.883440 6 log.go:172] (0xc0017d84d0) Data frame received for 3 I0318 11:17:32.883475 6 log.go:172] (0xc0003be6e0) (3) Data frame handling I0318 11:17:32.883531 6 log.go:172] (0xc0003be6e0) (3) Data frame sent I0318 11:17:32.883571 6 log.go:172] (0xc0017d84d0) Data frame received for 3 I0318 11:17:32.883596 6 log.go:172] (0xc0003be6e0) (3) Data frame handling I0318 11:17:32.884745 6 log.go:172] (0xc0017d84d0) Data frame received for 1 I0318 11:17:32.884771 6 log.go:172] (0xc0003be000) (1) Data frame handling I0318 11:17:32.884795 6 log.go:172] (0xc0003be000) (1) Data frame sent I0318 11:17:32.884816 6 log.go:172] (0xc0017d84d0) (0xc0003be000) Stream removed, broadcasting: 1 I0318 11:17:32.884840 6 log.go:172] (0xc0017d84d0) Go away received I0318 11:17:32.885013 6 log.go:172] (0xc0017d84d0) (0xc0003be000) Stream removed, broadcasting: 1 I0318 11:17:32.885055 6 log.go:172] (0xc0017d84d0) (0xc0003be6e0) Stream removed, broadcasting: 3 I0318 11:17:32.885068 6 log.go:172] (0xc0017d84d0) (0xc001f743c0) Stream removed, broadcasting: 5 Mar 18 11:17:32.885: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:17:32.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-ttktz" for this suite. 
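The node-pod networking check is visible almost verbatim in the log: from a host-network helper pod, curl each netserver pod's /hostName endpoint and require an answer from every one. Re-run by hand it would look roughly like this (namespace, pod, container, and IP are the ones the log reports; in another cluster they would come from kubectl get pods -o wide, and the suite additionally filters out blank lines with grep):

kubectl exec -n e2e-tests-pod-network-test-ttktz host-test-container-pod -c hostexec -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.201:8080/hostName

# A non-empty hostname here is what "Found all expected endpoints: [netserver-0]"
# corresponds to; the second curl in the log repeats this against the netserver
# pod on the other worker (10.244.1.13).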
Mar 18 11:17:54.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:17:54.938: INFO: namespace: e2e-tests-pod-network-test-ttktz, resource: bindings, ignored listing per whitelist Mar 18 11:17:54.977: INFO: namespace e2e-tests-pod-network-test-ttktz deletion completed in 22.08751834s • [SLOW TEST:48.517 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:17:54.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-4w95 STEP: Creating a pod to test atomic-volume-subpath Mar 18 11:17:55.110: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4w95" in namespace "e2e-tests-subpath-2rwwv" to be "success or failure" Mar 18 11:17:55.128: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Pending", Reason="", readiness=false. Elapsed: 17.743278ms Mar 18 11:17:57.132: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021248358s Mar 18 11:17:59.136: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025911122s Mar 18 11:18:01.140: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. Elapsed: 6.029694962s Mar 18 11:18:03.144: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. Elapsed: 8.033860497s Mar 18 11:18:05.149: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. Elapsed: 10.038732679s Mar 18 11:18:07.153: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. Elapsed: 12.042778484s Mar 18 11:18:09.157: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. Elapsed: 14.047008666s Mar 18 11:18:11.162: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. Elapsed: 16.051051607s Mar 18 11:18:13.166: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. Elapsed: 18.055508079s Mar 18 11:18:15.170: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.059808561s Mar 18 11:18:17.175: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. Elapsed: 22.064205812s Mar 18 11:18:19.179: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Running", Reason="", readiness=false. Elapsed: 24.068326084s Mar 18 11:18:21.188: INFO: Pod "pod-subpath-test-configmap-4w95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.077519623s STEP: Saw pod success Mar 18 11:18:21.188: INFO: Pod "pod-subpath-test-configmap-4w95" satisfied condition "success or failure" Mar 18 11:18:21.191: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-4w95 container test-container-subpath-configmap-4w95: STEP: delete the pod Mar 18 11:18:21.254: INFO: Waiting for pod pod-subpath-test-configmap-4w95 to disappear Mar 18 11:18:21.268: INFO: Pod pod-subpath-test-configmap-4w95 no longer exists STEP: Deleting pod pod-subpath-test-configmap-4w95 Mar 18 11:18:21.268: INFO: Deleting pod "pod-subpath-test-configmap-4w95" in namespace "e2e-tests-subpath-2rwwv" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:18:21.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-2rwwv" for this suite. Mar 18 11:18:27.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:18:27.333: INFO: namespace: e2e-tests-subpath-2rwwv, resource: bindings, ignored listing per whitelist Mar 18 11:18:27.414: INFO: namespace e2e-tests-subpath-2rwwv deletion completed in 6.116637706s • [SLOW TEST:32.437 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:18:27.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 18 11:18:32.039: INFO: Successfully updated pod "annotationupdate30a36603-690a-11ea-9856-0242ac11000f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:18:34.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dm57n" for this suite. 
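The Downward API annotations test creates a pod whose volume exposes metadata.annotations as a file, patches the annotations, and waits for the file to change. A minimal sketch under assumed names (annotationupdate-demo, /etc/podinfo, a build annotation):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo         # placeholder name
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# The update the suite performs ("Successfully updated pod ...") amounts to:
kubectl annotate pod annotationupdate-demo build="two" --overwrite
# and the mounted file follows on the kubelet's next volume sync.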
Mar 18 11:18:56.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:18:56.102: INFO: namespace: e2e-tests-downward-api-dm57n, resource: bindings, ignored listing per whitelist Mar 18 11:18:56.152: INFO: namespace e2e-tests-downward-api-dm57n deletion completed in 22.091213785s • [SLOW TEST:28.738 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:18:56.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-41c9bef7-690a-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:18:56.276: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41cbdd12-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-fh5pc" to be "success or failure" Mar 18 11:18:56.281: INFO: Pod "pod-projected-secrets-41cbdd12-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.832659ms Mar 18 11:18:58.285: INFO: Pod "pod-projected-secrets-41cbdd12-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008896898s Mar 18 11:19:00.289: INFO: Pod "pod-projected-secrets-41cbdd12-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013159496s STEP: Saw pod success Mar 18 11:19:00.289: INFO: Pod "pod-projected-secrets-41cbdd12-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:19:00.292: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-41cbdd12-690a-11ea-9856-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 18 11:19:00.326: INFO: Waiting for pod pod-projected-secrets-41cbdd12-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:19:00.340: INFO: Pod pod-projected-secrets-41cbdd12-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:19:00.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fh5pc" for this suite. 
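The projected-secret test maps a secret key to a new file name and pins the file mode, then reads it back. A minimal sketch, assuming the key data-1 and the placeholder names shown in comments:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo    # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-secret && cat /etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1     # the "mapping": the key is exposed under a different file name
            mode: 0400                # the per-item "Item Mode" the test verifies
EOF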
Mar 18 11:19:06.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:19:06.381: INFO: namespace: e2e-tests-projected-fh5pc, resource: bindings, ignored listing per whitelist Mar 18 11:19:06.446: INFO: namespace e2e-tests-projected-fh5pc deletion completed in 6.102262335s • [SLOW TEST:10.294 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:19:06.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-47e9e44a-690a-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:19:06.544: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-47ea9f2e-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-nwbfh" to be "success or failure" Mar 18 11:19:06.560: INFO: Pod "pod-projected-secrets-47ea9f2e-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.617133ms Mar 18 11:19:08.564: INFO: Pod "pod-projected-secrets-47ea9f2e-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019630633s Mar 18 11:19:10.576: INFO: Pod "pod-projected-secrets-47ea9f2e-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031149655s STEP: Saw pod success Mar 18 11:19:10.576: INFO: Pod "pod-projected-secrets-47ea9f2e-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:19:10.578: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-47ea9f2e-690a-11ea-9856-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 18 11:19:10.604: INFO: Waiting for pod pod-projected-secrets-47ea9f2e-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:19:10.608: INFO: Pod pod-projected-secrets-47ea9f2e-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:19:10.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nwbfh" for this suite. 
Mar 18 11:19:16.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:19:16.707: INFO: namespace: e2e-tests-projected-nwbfh, resource: bindings, ignored listing per whitelist Mar 18 11:19:16.731: INFO: namespace e2e-tests-projected-nwbfh deletion completed in 6.120401515s • [SLOW TEST:10.285 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:19:16.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Mar 18 11:19:16.833: INFO: Waiting up to 5m0s for pod "client-containers-4e0bb855-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-containers-dsl5v" to be "success or failure" Mar 18 11:19:16.836: INFO: Pod "client-containers-4e0bb855-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.445856ms Mar 18 11:19:18.851: INFO: Pod "client-containers-4e0bb855-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017833605s Mar 18 11:19:20.854: INFO: Pod "client-containers-4e0bb855-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02136528s STEP: Saw pod success Mar 18 11:19:20.854: INFO: Pod "client-containers-4e0bb855-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:19:20.856: INFO: Trying to get logs from node hunter-worker2 pod client-containers-4e0bb855-690a-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:19:20.868: INFO: Waiting for pod client-containers-4e0bb855-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:19:20.874: INFO: Pod client-containers-4e0bb855-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:19:20.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-dsl5v" for this suite. 
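In pod-spec terms, overriding an image's default arguments (its Docker CMD) means setting args without setting command: the image's entrypoint is kept and only the arguments are replaced. A minimal sketch with assumed values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo        # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # No "command": the image's entrypoint (if any) is preserved.
    # "args" replaces the image's default CMD.
    args: ["echo", "override", "arguments"]
EOF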
Mar 18 11:19:26.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:19:26.928: INFO: namespace: e2e-tests-containers-dsl5v, resource: bindings, ignored listing per whitelist Mar 18 11:19:26.967: INFO: namespace e2e-tests-containers-dsl5v deletion completed in 6.091122574s • [SLOW TEST:10.236 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:19:26.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:19:27.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54294e26-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-qf6l2" to be "success or failure" Mar 18 11:19:27.112: INFO: Pod "downwardapi-volume-54294e26-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.401343ms Mar 18 11:19:29.116: INFO: Pod "downwardapi-volume-54294e26-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018227005s Mar 18 11:19:31.120: INFO: Pod "downwardapi-volume-54294e26-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022479614s STEP: Saw pod success Mar 18 11:19:31.120: INFO: Pod "downwardapi-volume-54294e26-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:19:31.123: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-54294e26-690a-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:19:31.146: INFO: Waiting for pod downwardapi-volume-54294e26-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:19:31.150: INFO: Pod downwardapi-volume-54294e26-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:19:31.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qf6l2" for this suite. 
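Here the container deliberately sets no memory limit, so the downward API's limits.memory resolves to the node's allocatable memory, which is what the test asserts. A minimal sketch under assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo     # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory on purpose: the downward API then reports
    # the node's allocatable memory instead of a container limit.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF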
Mar 18 11:19:37.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:19:37.215: INFO: namespace: e2e-tests-downward-api-qf6l2, resource: bindings, ignored listing per whitelist Mar 18 11:19:37.262: INFO: namespace e2e-tests-downward-api-qf6l2 deletion completed in 6.109117259s • [SLOW TEST:10.294 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:19:37.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:19:37.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a4ea886-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-hw9b7" to be "success or failure" Mar 18 11:19:37.414: INFO: Pod "downwardapi-volume-5a4ea886-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.038335ms Mar 18 11:19:39.418: INFO: Pod "downwardapi-volume-5a4ea886-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020742709s Mar 18 11:19:41.423: INFO: Pod "downwardapi-volume-5a4ea886-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025396143s STEP: Saw pod success Mar 18 11:19:41.423: INFO: Pod "downwardapi-volume-5a4ea886-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:19:41.426: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5a4ea886-690a-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:19:41.449: INFO: Waiting for pod downwardapi-volume-5a4ea886-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:19:41.454: INFO: Pod downwardapi-volume-5a4ea886-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:19:41.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hw9b7" for this suite. 
Mar 18 11:19:47.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:19:47.612: INFO: namespace: e2e-tests-projected-hw9b7, resource: bindings, ignored listing per whitelist Mar 18 11:19:47.626: INFO: namespace e2e-tests-projected-hw9b7 deletion completed in 6.168138267s • [SLOW TEST:10.364 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:19:47.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-60756924-690a-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:19:47.823: INFO: Waiting up to 5m0s for pod "pod-secrets-6084ce35-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-secrets-l2gpg" to be "success or failure" Mar 18 11:19:47.827: INFO: Pod "pod-secrets-6084ce35-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.862654ms Mar 18 11:19:49.831: INFO: Pod "pod-secrets-6084ce35-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007845886s Mar 18 11:19:51.835: INFO: Pod "pod-secrets-6084ce35-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011758617s STEP: Saw pod success Mar 18 11:19:51.835: INFO: Pod "pod-secrets-6084ce35-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:19:51.837: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-6084ce35-690a-11ea-9856-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 18 11:19:51.879: INFO: Waiting for pod pod-secrets-6084ce35-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:19:51.905: INFO: Pod pod-secrets-6084ce35-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:19:51.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-l2gpg" for this suite. 
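The secrets test above relies on the fact that a secret volume is resolved only in the pod's own namespace, so a secret with the same name in another namespace never affects the mount. A minimal sketch along those lines (namespace, pod name, image and command are illustrative; the conformance test uses its own mount-test image):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// SecretName is looked up in the pod's namespace ("ns-a" here), never
	// in any other namespace that happens to hold a same-named secret.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo", Namespace: "ns-a"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test", // resolved in ns-a only
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}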
Mar 18 11:19:57.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:19:57.955: INFO: namespace: e2e-tests-secrets-l2gpg, resource: bindings, ignored listing per whitelist Mar 18 11:19:58.015: INFO: namespace e2e-tests-secrets-l2gpg deletion completed in 6.106284904s STEP: Destroying namespace "e2e-tests-secret-namespace-vxs8n" for this suite. Mar 18 11:20:04.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:20:04.124: INFO: namespace: e2e-tests-secret-namespace-vxs8n, resource: bindings, ignored listing per whitelist Mar 18 11:20:04.128: INFO: namespace e2e-tests-secret-namespace-vxs8n deletion completed in 6.113181488s • [SLOW TEST:16.502 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:20:04.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-plwmk [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 18 11:20:04.283: INFO: Found 0 stateful pods, waiting for 3 Mar 18 11:20:14.288: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 11:20:14.288: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 11:20:14.288: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 18 11:20:14.316: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 18 11:20:24.389: INFO: Updating stateful set ss2 Mar 18 11:20:24.400: INFO: Waiting for Pod e2e-tests-statefulset-plwmk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 18 11:20:34.523: INFO: Found 1 stateful pods, waiting for 3 Mar 18 11:20:44.528: INFO: Waiting for pod
ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 11:20:44.528: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 11:20:44.528: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 18 11:20:44.551: INFO: Updating stateful set ss2 Mar 18 11:20:44.568: INFO: Waiting for Pod e2e-tests-statefulset-plwmk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 18 11:20:54.591: INFO: Updating stateful set ss2 Mar 18 11:20:54.617: INFO: Waiting for StatefulSet e2e-tests-statefulset-plwmk/ss2 to complete update Mar 18 11:20:54.617: INFO: Waiting for Pod e2e-tests-statefulset-plwmk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 18 11:21:04.624: INFO: Deleting all statefulset in ns e2e-tests-statefulset-plwmk Mar 18 11:21:04.627: INFO: Scaling statefulset ss2 to 0 Mar 18 11:21:24.646: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 11:21:24.650: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:21:24.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-plwmk" for this suite. Mar 18 11:21:30.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:21:30.794: INFO: namespace: e2e-tests-statefulset-plwmk, resource: bindings, ignored listing per whitelist Mar 18 11:21:30.810: INFO: namespace e2e-tests-statefulset-plwmk deletion completed in 6.144272241s • [SLOW TEST:86.682 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:21:30.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-9df93106-690a-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:21:30.937: INFO: Waiting up to 5m0s for pod "pod-secrets-9dfb28c6-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-secrets-xmfcm" to be "success or failure" Mar 18 11:21:30.941: INFO: Pod 
"pod-secrets-9dfb28c6-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137574ms Mar 18 11:21:32.945: INFO: Pod "pod-secrets-9dfb28c6-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008010257s Mar 18 11:21:34.949: INFO: Pod "pod-secrets-9dfb28c6-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012326042s STEP: Saw pod success Mar 18 11:21:34.949: INFO: Pod "pod-secrets-9dfb28c6-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:21:34.952: INFO: Trying to get logs from node hunter-worker pod pod-secrets-9dfb28c6-690a-11ea-9856-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 18 11:21:34.972: INFO: Waiting for pod pod-secrets-9dfb28c6-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:21:34.983: INFO: Pod pod-secrets-9dfb28c6-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:21:34.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xmfcm" for this suite. Mar 18 11:21:41.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:21:41.025: INFO: namespace: e2e-tests-secrets-xmfcm, resource: bindings, ignored listing per whitelist Mar 18 11:21:41.091: INFO: namespace e2e-tests-secrets-xmfcm deletion completed in 6.104197567s • [SLOW TEST:10.280 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:21:41.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Mar 18 11:21:41.234: INFO: Waiting up to 5m0s for pod "client-containers-a41862c8-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-containers-k92h8" to be "success or failure" Mar 18 11:21:41.291: INFO: Pod "client-containers-a41862c8-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 57.163113ms Mar 18 11:21:43.296: INFO: Pod "client-containers-a41862c8-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061606191s Mar 18 11:21:45.302: INFO: Pod "client-containers-a41862c8-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.068061871s STEP: Saw pod success Mar 18 11:21:45.302: INFO: Pod "client-containers-a41862c8-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:21:45.305: INFO: Trying to get logs from node hunter-worker2 pod client-containers-a41862c8-690a-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:21:45.322: INFO: Waiting for pod client-containers-a41862c8-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:21:45.326: INFO: Pod client-containers-a41862c8-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:21:45.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-k92h8" for this suite. Mar 18 11:21:51.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:21:51.390: INFO: namespace: e2e-tests-containers-k92h8, resource: bindings, ignored listing per whitelist Mar 18 11:21:51.421: INFO: namespace e2e-tests-containers-k92h8 deletion completed in 6.091019755s • [SLOW TEST:10.330 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:21:51.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 18 11:21:51.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7x2pw' Mar 18 11:21:53.627: INFO: stderr: "" Mar 18 11:21:53.627: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 18 11:21:54.632: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:21:54.632: INFO: Found 0 / 1 Mar 18 11:21:55.632: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:21:55.632: INFO: Found 0 / 1 Mar 18 11:21:56.632: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:21:56.632: INFO: Found 0 / 1 Mar 18 11:21:57.632: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:21:57.632: INFO: Found 1 / 1 Mar 18 11:21:57.632: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 18 11:21:57.636: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:21:57.636: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
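The container override test that completes above hinges on the rule that a container's command replaces the image's ENTRYPOINT while its args replace the image's CMD; setting both overrides everything the image would run by default. A minimal sketch of such a container definition (image and values are illustrative, not the ones the suite uses):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Command overrides ENTRYPOINT, Args overrides CMD.
	c := corev1.Container{
		Name:    "test-container",
		Image:   "busybox",
		Command: []string{"/bin/echo"},
		Args:    []string{"override", "all"},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}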
Mar 18 11:21:57.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-24wzt --namespace=e2e-tests-kubectl-7x2pw -p {"metadata":{"annotations":{"x":"y"}}}' Mar 18 11:21:57.736: INFO: stderr: "" Mar 18 11:21:57.736: INFO: stdout: "pod/redis-master-24wzt patched\n" STEP: checking annotations Mar 18 11:21:57.739: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:21:57.739: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:21:57.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7x2pw" for this suite. Mar 18 11:22:19.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:22:19.824: INFO: namespace: e2e-tests-kubectl-7x2pw, resource: bindings, ignored listing per whitelist Mar 18 11:22:19.836: INFO: namespace e2e-tests-kubectl-7x2pw deletion completed in 22.093481567s • [SLOW TEST:28.414 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:22:19.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 18 11:22:19.955: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:22:26.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-q4srv" for this suite. 
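The init-container test above exercises the ordering guarantee that init containers run to completion, one at a time and in order, before the regular containers of a RestartAlways pod start. A minimal sketch of such a pod (names, image and commands are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// init-1 must exit successfully before init-2 starts, and both must
	// finish before the "app" container is started.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}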
Mar 18 11:22:48.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:22:48.339: INFO: namespace: e2e-tests-init-container-q4srv, resource: bindings, ignored listing per whitelist Mar 18 11:22:48.377: INFO: namespace e2e-tests-init-container-q4srv deletion completed in 22.084500556s • [SLOW TEST:28.542 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:22:48.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:22:48.512: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc30a12c-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-5m7z5" to be "success or failure" Mar 18 11:22:48.517: INFO: Pod "downwardapi-volume-cc30a12c-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.332808ms Mar 18 11:22:50.521: INFO: Pod "downwardapi-volume-cc30a12c-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008895561s Mar 18 11:22:52.526: INFO: Pod "downwardapi-volume-cc30a12c-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014036849s STEP: Saw pod success Mar 18 11:22:52.526: INFO: Pod "downwardapi-volume-cc30a12c-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:22:52.528: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-cc30a12c-690a-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:22:52.565: INFO: Waiting for pod downwardapi-volume-cc30a12c-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:22:52.592: INFO: Pod downwardapi-volume-cc30a12c-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:22:52.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5m7z5" for this suite. 
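The downward API DefaultMode test above checks that a volume-wide default file mode is applied to every projected file that does not set its own per-item mode. A minimal sketch, assuming an illustrative mode of 0400 (the exact value used by the test is not shown in this log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// DefaultMode applies to all files in the volume unless an item
	// overrides it with its own Mode.
	defaultMode := int32(0400)
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				DefaultMode: &defaultMode,
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}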
Mar 18 11:22:58.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:22:58.667: INFO: namespace: e2e-tests-downward-api-5m7z5, resource: bindings, ignored listing per whitelist Mar 18 11:22:58.709: INFO: namespace e2e-tests-downward-api-5m7z5 deletion completed in 6.114294037s • [SLOW TEST:10.332 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:22:58.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:22:58.844: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Mar 18 11:22:58.850: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-92w96/daemonsets","resourceVersion":"489463"},"items":null} Mar 18 11:22:58.853: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-92w96/pods","resourceVersion":"489463"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:22:58.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-92w96" for this suite. 
Mar 18 11:23:04.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:23:04.983: INFO: namespace: e2e-tests-daemonsets-92w96, resource: bindings, ignored listing per whitelist Mar 18 11:23:04.999: INFO: namespace e2e-tests-daemonsets-92w96 deletion completed in 6.13618753s S [SKIPPING] [6.290 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:22:58.844: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:23:05.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-d61e3d93-690a-11ea-9856-0242ac11000f STEP: Creating secret with name secret-projected-all-test-volume-d61e3d7e-690a-11ea-9856-0242ac11000f STEP: Creating a pod to test Check all projections for projected volume plugin Mar 18 11:23:05.150: INFO: Waiting up to 5m0s for pod "projected-volume-d61e3d41-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-62dm2" to be "success or failure" Mar 18 11:23:05.165: INFO: Pod "projected-volume-d61e3d41-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.909545ms Mar 18 11:23:07.168: INFO: Pod "projected-volume-d61e3d41-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018707592s Mar 18 11:23:09.172: INFO: Pod "projected-volume-d61e3d41-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022572051s STEP: Saw pod success Mar 18 11:23:09.172: INFO: Pod "projected-volume-d61e3d41-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:23:09.175: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-d61e3d41-690a-11ea-9856-0242ac11000f container projected-all-volume-test: STEP: delete the pod Mar 18 11:23:09.202: INFO: Waiting for pod projected-volume-d61e3d41-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:23:09.206: INFO: Pod projected-volume-d61e3d41-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:23:09.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-62dm2" for this suite. 
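The projected-combined test above mounts a secret, a config map and downward API fields through a single projected volume so all of them appear under one directory. A minimal sketch of that volume shape (the object names are shortened, illustrative versions of the generated names seen in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One projected volume merging three different sources into a single
	// mounted directory.
	vol := corev1.Volume{
		Name: "projected-all",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}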
Mar 18 11:23:15.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:23:15.298: INFO: namespace: e2e-tests-projected-62dm2, resource: bindings, ignored listing per whitelist Mar 18 11:23:15.371: INFO: namespace e2e-tests-projected-62dm2 deletion completed in 6.158173173s • [SLOW TEST:10.371 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:23:15.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-dc4e24a6-690a-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 11:23:15.603: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dc522e0e-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-9q494" to be "success or failure" Mar 18 11:23:15.606: INFO: Pod "pod-projected-configmaps-dc522e0e-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.646788ms Mar 18 11:23:17.610: INFO: Pod "pod-projected-configmaps-dc522e0e-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006713775s Mar 18 11:23:19.614: INFO: Pod "pod-projected-configmaps-dc522e0e-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010715385s STEP: Saw pod success Mar 18 11:23:19.614: INFO: Pod "pod-projected-configmaps-dc522e0e-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:23:19.617: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-dc522e0e-690a-11ea-9856-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 18 11:23:19.654: INFO: Waiting for pod pod-projected-configmaps-dc522e0e-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:23:19.670: INFO: Pod pod-projected-configmaps-dc522e0e-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:23:19.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9q494" for this suite. 
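The projected configMap test above consumes the same config map through two separate volumes mounted at two paths in one container. A minimal sketch of that arrangement (config map name, pod name, image, command and mount paths are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two projected volumes backed by the same config map, mounted at
	// different paths inside a single container.
	cmRef := corev1.LocalObjectReference{Name: "projected-configmap-test-volume"}
	newVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{LocalObjectReference: cmRef},
					}},
				},
			},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-volume-1", MountPath: "/etc/cm-1", ReadOnly: true},
					{Name: "cm-volume-2", MountPath: "/etc/cm-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{newVol("cm-volume-1"), newVol("cm-volume-2")},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}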
Mar 18 11:23:25.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:23:25.721: INFO: namespace: e2e-tests-projected-9q494, resource: bindings, ignored listing per whitelist Mar 18 11:23:25.768: INFO: namespace e2e-tests-projected-9q494 deletion completed in 6.094848306s • [SLOW TEST:10.397 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:23:25.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:23:25.870: INFO: Creating deployment "nginx-deployment" Mar 18 11:23:25.891: INFO: Waiting for observed generation 1 Mar 18 11:23:27.909: INFO: Waiting for all required pods to come up Mar 18 11:23:27.914: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 18 11:23:35.923: INFO: Waiting for deployment "nginx-deployment" to complete Mar 18 11:23:35.947: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 18 11:23:35.954: INFO: Updating deployment nginx-deployment Mar 18 11:23:35.954: INFO: Waiting for observed generation 2 Mar 18 11:23:38.004: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 18 11:23:38.007: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 18 11:23:38.010: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 18 11:23:38.017: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 18 11:23:38.017: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 18 11:23:38.019: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 18 11:23:38.023: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 18 11:23:38.023: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 18 11:23:38.029: INFO: Updating deployment nginx-deployment Mar 18 11:23:38.029: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 18 11:23:38.070: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 18 11:23:38.105: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 18 11:23:38.206: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5b6ch/deployments/nginx-deployment,UID:e27de4e7-690a-11ea-99e8-0242ac110002,ResourceVersion:489793,Generation:3,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-18 11:23:36 +0000 UTC 2020-03-18 11:23:25 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-03-18 11:23:38 +0000 UTC 2020-03-18 11:23:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 18 11:23:38.355: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5b6ch/replicasets/nginx-deployment-5c98f8fb5,UID:e8815cbd-690a-11ea-99e8-0242ac110002,ResourceVersion:489826,Generation:3,CreationTimestamp:2020-03-18 11:23:35 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e27de4e7-690a-11ea-99e8-0242ac110002 0xc001998447 0xc001998448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 11:23:38.355: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 18 11:23:38.355: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5b6ch/replicasets/nginx-deployment-85ddf47c5d,UID:e283767a-690a-11ea-99e8-0242ac110002,ResourceVersion:489815,Generation:3,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e27de4e7-690a-11ea-99e8-0242ac110002 0xc001998567 0xc001998568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 18 11:23:38.449: INFO: Pod "nginx-deployment-5c98f8fb5-4bdw9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4bdw9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-4bdw9,UID:e88d0676-690a-11ea-99e8-0242ac110002,ResourceVersion:489755,Generation:0,CreationTimestamp:2020-03-18 11:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001999b57 0xc001999b58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001999c90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001999cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-18 11:23:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.449: INFO: Pod "nginx-deployment-5c98f8fb5-7btdd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7btdd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-7btdd,UID:e88cf221-690a-11ea-99e8-0242ac110002,ResourceVersion:489746,Generation:0,CreationTimestamp:2020-03-18 11:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001999d70 0xc001999d71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f76090} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f76100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-18 11:23:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.449: INFO: Pod "nginx-deployment-5c98f8fb5-98hhs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-98hhs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-98hhs,UID:e9ced4df-690a-11ea-99e8-0242ac110002,ResourceVersion:489812,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f76300 0xc001f76301}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f76380} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f763a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.449: INFO: Pod "nginx-deployment-5c98f8fb5-b2rrj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b2rrj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-b2rrj,UID:e88b62ab-690a-11ea-99e8-0242ac110002,ResourceVersion:489745,Generation:0,CreationTimestamp:2020-03-18 11:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f76580 0xc001f76581}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f76840} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f76860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-18 11:23:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.449: INFO: Pod "nginx-deployment-5c98f8fb5-b6zlr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b6zlr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-b6zlr,UID:e9d41724-690a-11ea-99e8-0242ac110002,ResourceVersion:489825,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f76990 0xc001f76991}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f76a10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f76a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.450: INFO: Pod "nginx-deployment-5c98f8fb5-c75jf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c75jf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-c75jf,UID:e9c38d3d-690a-11ea-99e8-0242ac110002,ResourceVersion:489836,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f76b50 0xc001f76b51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f76bd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f76bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-18 11:23:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.450: INFO: Pod "nginx-deployment-5c98f8fb5-dzlsg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dzlsg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-dzlsg,UID:e9c90d0e-690a-11ea-99e8-0242ac110002,ResourceVersion:489807,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f76cb0 0xc001f76cb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f76da0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f76dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.450: INFO: Pod "nginx-deployment-5c98f8fb5-gh98f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gh98f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-gh98f,UID:e8a2c3ed-690a-11ea-99e8-0242ac110002,ResourceVersion:489764,Generation:0,CreationTimestamp:2020-03-18 11:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f76e30 0xc001f76e31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f76eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f76ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-18 11:23:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.450: INFO: Pod "nginx-deployment-5c98f8fb5-j9ww7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-j9ww7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-j9ww7,UID:e8a5cc0a-690a-11ea-99e8-0242ac110002,ResourceVersion:489768,Generation:0,CreationTimestamp:2020-03-18 11:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f76f90 0xc001f76f91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f77010} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f77030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-18 11:23:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.450: INFO: Pod "nginx-deployment-5c98f8fb5-lbp26" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lbp26,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-lbp26,UID:e9c917ed-690a-11ea-99e8-0242ac110002,ResourceVersion:489810,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f770f0 0xc001f770f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f77170} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f77190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.450: INFO: Pod "nginx-deployment-5c98f8fb5-td972" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-td972,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-td972,UID:e9cee771-690a-11ea-99e8-0242ac110002,ResourceVersion:489818,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f77200 0xc001f77201}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f77280} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f772a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.450: INFO: Pod "nginx-deployment-5c98f8fb5-ts48h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ts48h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-ts48h,UID:e9cebc28-690a-11ea-99e8-0242ac110002,ResourceVersion:489817,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f77310 0xc001f77311}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f77390} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f773b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.451: INFO: Pod "nginx-deployment-5c98f8fb5-zp2c9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zp2c9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-5c98f8fb5-zp2c9,UID:e9ced713-690a-11ea-99e8-0242ac110002,ResourceVersion:489814,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e8815cbd-690a-11ea-99e8-0242ac110002 0xc001f77420 0xc001f77421}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f774a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f774c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.451: INFO: Pod "nginx-deployment-85ddf47c5d-2qpwb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2qpwb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-2qpwb,UID:e9ceffb0-690a-11ea-99e8-0242ac110002,ResourceVersion:489823,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc001f77530 0xc001f77531}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f775a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f775c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.451: INFO: Pod "nginx-deployment-85ddf47c5d-6ffh8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6ffh8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-6ffh8,UID:e28821ac-690a-11ea-99e8-0242ac110002,ResourceVersion:489675,Generation:0,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc001f77630 0xc001f77631}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f776a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f776c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.213,StartTime:2020-03-18 11:23:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 11:23:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bcf2a02cb0b51f4e2da0ab4b5f85bdefc7df0e4cd1c334f5c4dd8ff244bbbfe7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.451: INFO: Pod "nginx-deployment-85ddf47c5d-7fj5m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7fj5m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-7fj5m,UID:e28c2316-690a-11ea-99e8-0242ac110002,ResourceVersion:489703,Generation:0,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc001f77780 0xc001f77781}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f777f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f77810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.31,StartTime:2020-03-18 11:23:26 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-03-18 11:23:34 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://63418cd0bce245791fbd32cfaab632956e413c18958d04ea364f38dcf4248c03}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.451: INFO: Pod "nginx-deployment-85ddf47c5d-89m8g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-89m8g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-89m8g,UID:e9c910eb-690a-11ea-99e8-0242ac110002,ResourceVersion:489808,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc001f778d0 0xc001f778d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f77950} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f77990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.451: INFO: Pod "nginx-deployment-85ddf47c5d-8pqqz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8pqqz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-8pqqz,UID:e9bed0da-690a-11ea-99e8-0242ac110002,ResourceVersion:489832,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc001f77a00 0xc001f77a01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f77a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f77a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-18 11:23:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.451: INFO: Pod "nginx-deployment-85ddf47c5d-b2kh4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b2kh4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-b2kh4,UID:e9cefaaf-690a-11ea-99e8-0242ac110002,ResourceVersion:489821,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc001f77b40 0xc001f77b41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f77bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f77bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.452: INFO: Pod "nginx-deployment-85ddf47c5d-fprqx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fprqx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-fprqx,UID:e28c1a64-690a-11ea-99e8-0242ac110002,ResourceVersion:489706,Generation:0,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc001f77dc0 0xc001f77dc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f77f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f77fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.216,StartTime:2020-03-18 11:23:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 11:23:34 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://868952de012a8907c77ecbb09366d54b700a66fe05127911ab4ea8040ff04f11}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.452: INFO: Pod "nginx-deployment-85ddf47c5d-g6xbf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g6xbf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-g6xbf,UID:e287913e-690a-11ea-99e8-0242ac110002,ResourceVersion:489668,Generation:0,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc002128490 0xc002128491}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002128660} {node.kubernetes.io/unreachable Exists NoExecute 0xc002128760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.212,StartTime:2020-03-18 11:23:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 11:23:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0fcf78beae6afa910ed6a4900162574f80378d5ed9d8eb19d1b6db013110a7af}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.452: INFO: Pod "nginx-deployment-85ddf47c5d-grhrf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-grhrf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-grhrf,UID:e28c133d-690a-11ea-99e8-0242ac110002,ResourceVersion:489697,Generation:0,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc002128910 0xc002128911}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002128980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021289a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.30,StartTime:2020-03-18 11:23:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 11:23:34 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://dd11a8610ee2c106d451895eba490c409c5183bcbfe7c7be629705a489b1c72a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.452: INFO: Pod "nginx-deployment-85ddf47c5d-kb7fm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kb7fm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-kb7fm,UID:e9cef922-690a-11ea-99e8-0242ac110002,ResourceVersion:489822,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc002128d90 0xc002128d91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002128e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002128ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.452: INFO: Pod "nginx-deployment-85ddf47c5d-m2jsz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m2jsz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-m2jsz,UID:e2881c04-690a-11ea-99e8-0242ac110002,ResourceVersion:489653,Generation:0,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc002129030 0xc002129031}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002129190} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021291b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.28,StartTime:2020-03-18 11:23:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 11:23:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://faacdc9775d83bb15ed431a8d478c713c281524f6998bae30393a36248a990c9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.452: INFO: Pod "nginx-deployment-85ddf47c5d-nm4n4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nm4n4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-nm4n4,UID:e9cef4e9-690a-11ea-99e8-0242ac110002,ResourceVersion:489819,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc002129290 0xc002129291}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021293e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002129400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.453: INFO: Pod "nginx-deployment-85ddf47c5d-qh9fx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qh9fx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-qh9fx,UID:e9c90c10-690a-11ea-99e8-0242ac110002,ResourceVersion:489799,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc0021294f0 0xc0021294f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002129660} {node.kubernetes.io/unreachable Exists NoExecute 0xc002129700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.453: INFO: Pod "nginx-deployment-85ddf47c5d-qlqbd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qlqbd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-qlqbd,UID:e9c3a58e-690a-11ea-99e8-0242ac110002,ResourceVersion:489800,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc002129c40 0xc002129c41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002129d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002129d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.453: INFO: Pod "nginx-deployment-85ddf47c5d-qvmgn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qvmgn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-qvmgn,UID:e9c9158b-690a-11ea-99e8-0242ac110002,ResourceVersion:489811,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc002129e90 0xc002129e91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00130a330} {node.kubernetes.io/unreachable Exists NoExecute 0xc00130a350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.453: INFO: Pod "nginx-deployment-85ddf47c5d-sxcrd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sxcrd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-sxcrd,UID:e9c388c6-690a-11ea-99e8-0242ac110002,ResourceVersion:489795,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc00130a480 0xc00130a481}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00130a4f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00130a590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.453: INFO: Pod "nginx-deployment-85ddf47c5d-tt2rg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tt2rg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-tt2rg,UID:e9c91ab6-690a-11ea-99e8-0242ac110002,ResourceVersion:489809,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc00130a900 0xc00130a901}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00130a970} {node.kubernetes.io/unreachable Exists NoExecute 0xc00130a990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.453: INFO: Pod "nginx-deployment-85ddf47c5d-vj9sm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vj9sm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-vj9sm,UID:e9cf0626-690a-11ea-99e8-0242ac110002,ResourceVersion:489820,Generation:0,CreationTimestamp:2020-03-18 11:23:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc00130b1b0 
0xc00130b1b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00130b220} {node.kubernetes.io/unreachable Exists NoExecute 0xc00130b240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.453: INFO: Pod "nginx-deployment-85ddf47c5d-vxmc7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vxmc7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-vxmc7,UID:e28916d5-690a-11ea-99e8-0242ac110002,ResourceVersion:489681,Generation:0,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc00130b2b0 0xc00130b2b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00130b5f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00130b610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.29,StartTime:2020-03-18 11:23:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 11:23:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b4487eb73084d323566903152fc928557b9f358ef4d65dadc3f888e07c300492}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 11:23:38.454: INFO: Pod "nginx-deployment-85ddf47c5d-zwwbt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zwwbt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-5b6ch,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5b6ch/pods/nginx-deployment-85ddf47c5d-zwwbt,UID:e2892079-690a-11ea-99e8-0242ac110002,ResourceVersion:489700,Generation:0,CreationTimestamp:2020-03-18 11:23:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e283767a-690a-11ea-99e8-0242ac110002 0xc00130b760 0xc00130b761}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qhrzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qhrzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qhrzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00130b820} {node.kubernetes.io/unreachable Exists NoExecute 0xc00130b850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:23:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.32,StartTime:2020-03-18 11:23:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 11:23:34 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5bffb8bbd17b301b90735824e55bc53c6478af5b6761181834590dc1bd3eda6c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:23:38.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-5b6ch" for this suite. 
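The proportional scaling checked by this spec can be reproduced by hand. Below is a minimal sketch, not the framework's code: the deployment name, the name=nginx label and the nginx:1.14-alpine image mirror the log above, while the surge settings, replica counts and the deliberately unpullable image tag are illustrative assumptions.

# Create a deployment whose rolling-update strategy allows a surge.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# Start a rollout to a tag that cannot be pulled so the rollout stays in flight,
# then scale the deployment; old and new ReplicaSets are scaled proportionally.
kubectl set image deployment/nginx-deployment nginx=nginx:does-not-exist   # hypothetical bad tag
kubectl scale deployment/nginx-deployment --replicas=30
kubectl get rs -l name=nginx    # replicas split across both ReplicaSets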
Mar 18 11:23:56.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:23:56.648: INFO: namespace: e2e-tests-deployment-5b6ch, resource: bindings, ignored listing per whitelist Mar 18 11:23:56.696: INFO: namespace e2e-tests-deployment-5b6ch deletion completed in 18.20227889s • [SLOW TEST:30.927 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:23:56.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-f508568d-690a-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:23:56.988: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f508d600-690a-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-jt5pd" to be "success or failure" Mar 18 11:23:56.992: INFO: Pod "pod-projected-secrets-f508d600-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.74487ms Mar 18 11:23:58.996: INFO: Pod "pod-projected-secrets-f508d600-690a-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007904048s Mar 18 11:24:01.000: INFO: Pod "pod-projected-secrets-f508d600-690a-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012205722s STEP: Saw pod success Mar 18 11:24:01.001: INFO: Pod "pod-projected-secrets-f508d600-690a-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:24:01.004: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-f508d600-690a-11ea-9856-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 18 11:24:01.037: INFO: Waiting for pod pod-projected-secrets-f508d600-690a-11ea-9856-0242ac11000f to disappear Mar 18 11:24:01.070: INFO: Pod pod-projected-secrets-f508d600-690a-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:24:01.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jt5pd" for this suite. 
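What this spec exercises, one secret surfaced through two projected volumes in the same pod, can be sketched with a short manifest. All names, mount paths and the busybox image below are illustrative assumptions, not the test's generated fixtures.

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/projected-volume-1 /etc/projected-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-demo
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF
kubectl logs pod-projected-secrets-demo   # the same key appears under both mount points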
Mar 18 11:24:07.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:24:07.124: INFO: namespace: e2e-tests-projected-jt5pd, resource: bindings, ignored listing per whitelist Mar 18 11:24:07.166: INFO: namespace e2e-tests-projected-jt5pd deletion completed in 6.092187348s • [SLOW TEST:10.470 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:24:07.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Mar 18 11:24:07.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-25rct' Mar 18 11:24:07.482: INFO: stderr: "" Mar 18 11:24:07.482: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Mar 18 11:24:08.487: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:24:08.487: INFO: Found 0 / 1 Mar 18 11:24:09.514: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:24:09.514: INFO: Found 0 / 1 Mar 18 11:24:10.486: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:24:10.486: INFO: Found 0 / 1 Mar 18 11:24:11.487: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:24:11.487: INFO: Found 1 / 1 Mar 18 11:24:11.487: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 18 11:24:11.490: INFO: Selector matched 1 pods for map[app:redis] Mar 18 11:24:11.490: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Mar 18 11:24:11.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-llrcv redis-master --namespace=e2e-tests-kubectl-25rct' Mar 18 11:24:11.603: INFO: stderr: "" Mar 18 11:24:11.603: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Mar 11:24:10.224 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Mar 11:24:10.224 # Server started, Redis version 3.2.12\n1:M 18 Mar 11:24:10.224 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Mar 11:24:10.224 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 18 11:24:11.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-llrcv redis-master --namespace=e2e-tests-kubectl-25rct --tail=1' Mar 18 11:24:11.707: INFO: stderr: "" Mar 18 11:24:11.707: INFO: stdout: "1:M 18 Mar 11:24:10.224 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 18 11:24:11.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-llrcv redis-master --namespace=e2e-tests-kubectl-25rct --limit-bytes=1' Mar 18 11:24:11.813: INFO: stderr: "" Mar 18 11:24:11.813: INFO: stdout: " " STEP: exposing timestamps Mar 18 11:24:11.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-llrcv redis-master --namespace=e2e-tests-kubectl-25rct --tail=1 --timestamps' Mar 18 11:24:11.927: INFO: stderr: "" Mar 18 11:24:11.927: INFO: stdout: "2020-03-18T11:24:10.224531424Z 1:M 18 Mar 11:24:10.224 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 18 11:24:14.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-llrcv redis-master --namespace=e2e-tests-kubectl-25rct --since=1s' Mar 18 11:24:14.549: INFO: stderr: "" Mar 18 11:24:14.549: INFO: stdout: "" Mar 18 11:24:14.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-llrcv redis-master --namespace=e2e-tests-kubectl-25rct --since=24h' Mar 18 11:24:14.658: INFO: stderr: "" Mar 18 11:24:14.658: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Mar 11:24:10.224 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Mar 11:24:10.224 # Server started, Redis version 3.2.12\n1:M 18 Mar 11:24:10.224 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Mar 11:24:10.224 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Mar 18 11:24:14.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-25rct' Mar 18 11:24:14.760: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 11:24:14.760: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 18 11:24:14.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-25rct' Mar 18 11:24:14.864: INFO: stderr: "No resources found.\n" Mar 18 11:24:14.864: INFO: stdout: "" Mar 18 11:24:14.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-25rct -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 11:24:14.947: INFO: stderr: "" Mar 18 11:24:14.947: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:24:14.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-25rct" for this suite. 
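The filtering options exercised above are ordinary kubectl flags. A condensed recap against a placeholder POD and CONTAINER follows (the deprecated `kubectl log` spelling used by the test is written as `kubectl logs` here):

kubectl logs POD -c CONTAINER                  # full container log
kubectl logs POD -c CONTAINER --tail=1         # only the last line
kubectl logs POD -c CONTAINER --limit-bytes=1  # only the first byte
kubectl logs POD -c CONTAINER --timestamps     # prefix each line with an RFC3339 timestamp
kubectl logs POD -c CONTAINER --since=1s       # only lines from the last second
kubectl logs POD -c CONTAINER --since=24h      # only lines from the last 24 hours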
Mar 18 11:24:37.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:24:37.090: INFO: namespace: e2e-tests-kubectl-25rct, resource: bindings, ignored listing per whitelist Mar 18 11:24:37.158: INFO: namespace e2e-tests-kubectl-25rct deletion completed in 22.207622911s • [SLOW TEST:29.992 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:24:37.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:25:03.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-6f4ll" for this suite. 
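The terminated-state bookkeeping this spec checks can be observed directly on a hand-made pod. The sketch below uses an illustrative name, image and exit code rather than the test's terminate-cmd-* fixtures.

# Run a container that exits immediately and inspect the status the kubelet records.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF
# Once the container has exited:
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'                                           # Succeeded
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}{"\n"}'    # Completed (Error for a non-zero exit)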
Mar 18 11:25:09.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:25:09.824: INFO: namespace: e2e-tests-container-runtime-6f4ll, resource: bindings, ignored listing per whitelist Mar 18 11:25:09.868: INFO: namespace e2e-tests-container-runtime-6f4ll deletion completed in 6.091878477s • [SLOW TEST:32.710 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:25:09.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:25:09.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20872b36-690b-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-cp82m" to be "success or failure" Mar 18 11:25:09.981: INFO: Pod "downwardapi-volume-20872b36-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.913829ms Mar 18 11:25:11.984: INFO: Pod "downwardapi-volume-20872b36-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020599573s Mar 18 11:25:13.988: INFO: Pod "downwardapi-volume-20872b36-690b-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02479482s STEP: Saw pod success Mar 18 11:25:13.989: INFO: Pod "downwardapi-volume-20872b36-690b-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:25:13.992: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-20872b36-690b-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:25:14.056: INFO: Waiting for pod downwardapi-volume-20872b36-690b-11ea-9856-0242ac11000f to disappear Mar 18 11:25:14.064: INFO: Pod downwardapi-volume-20872b36-690b-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:25:14.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cp82m" for this suite. 
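A hand-rolled equivalent of the pod this spec builds, a projected downward API volume with defaultMode set, looks roughly like the following; the names, the label and the 0400 mode are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
  labels:
    demo: downward
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl logs downwardapi-volume-demo   # the projected file's mode reflects defaultMode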
Mar 18 11:25:20.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:25:20.148: INFO: namespace: e2e-tests-projected-cp82m, resource: bindings, ignored listing per whitelist Mar 18 11:25:20.178: INFO: namespace e2e-tests-projected-cp82m deletion completed in 6.11051136s • [SLOW TEST:10.310 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:25:20.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:25:27.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-kdqd4" for this suite. 
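The adoption flow logged above (orphan pod first, matching controller second) can be reproduced with a two-document manifest. The names and image below are illustrative, not the test's generated values.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# The orphan pod now carries an ownerReference pointing at the controller:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'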
Mar 18 11:25:49.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:25:49.357: INFO: namespace: e2e-tests-replication-controller-kdqd4, resource: bindings, ignored listing per whitelist Mar 18 11:25:49.425: INFO: namespace e2e-tests-replication-controller-kdqd4 deletion completed in 22.099439346s • [SLOW TEST:29.246 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:25:49.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-lcp82 Mar 18 11:25:53.560: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-lcp82 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 11:25:53.563: INFO: Initial restart count of pod liveness-exec is 0 Mar 18 11:26:43.796: INFO: Restart count of pod e2e-tests-container-probe-lcp82/liveness-exec is now 1 (50.232661045s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:26:43.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-lcp82" for this suite. 
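The liveness behaviour above, a restart once `cat /tmp/health` starts failing, corresponds to an exec probe like the sketch below; the timings, image and pod name are illustrative assumptions rather than the test's exact fixture.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# Once /tmp/health disappears the probe fails and the kubelet restarts the container:
kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'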
Mar 18 11:26:49.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:26:49.900: INFO: namespace: e2e-tests-container-probe-lcp82, resource: bindings, ignored listing per whitelist Mar 18 11:26:49.929: INFO: namespace e2e-tests-container-probe-lcp82 deletion completed in 6.08993273s • [SLOW TEST:60.504 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:26:49.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:26:54.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-2dqgh" for this suite. 
Mar 18 11:27:00.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:27:00.103: INFO: namespace: e2e-tests-kubelet-test-2dqgh, resource: bindings, ignored listing per whitelist Mar 18 11:27:00.153: INFO: namespace e2e-tests-kubelet-test-2dqgh deletion completed in 6.096494414s • [SLOW TEST:10.224 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:27:00.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-h8mhg/secret-test-6244f626-690b-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:27:00.267: INFO: Waiting up to 5m0s for pod "pod-configmaps-6246f9a5-690b-11ea-9856-0242ac11000f" in namespace "e2e-tests-secrets-h8mhg" to be "success or failure" Mar 18 11:27:00.271: INFO: Pod "pod-configmaps-6246f9a5-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.718159ms Mar 18 11:27:02.301: INFO: Pod "pod-configmaps-6246f9a5-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033761308s Mar 18 11:27:04.305: INFO: Pod "pod-configmaps-6246f9a5-690b-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037686296s STEP: Saw pod success Mar 18 11:27:04.305: INFO: Pod "pod-configmaps-6246f9a5-690b-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:27:04.307: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-6246f9a5-690b-11ea-9856-0242ac11000f container env-test: STEP: delete the pod Mar 18 11:27:04.356: INFO: Waiting for pod pod-configmaps-6246f9a5-690b-11ea-9856-0242ac11000f to disappear Mar 18 11:27:04.359: INFO: Pod pod-configmaps-6246f9a5-690b-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:27:04.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-h8mhg" for this suite. 
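Consuming a secret "via the environment" as this spec does can be sketched with envFrom; the secret name, key and image below are illustrative assumptions.

kubectl create secret generic secret-env-demo --from-literal=SECRET_DATA=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    envFrom:
    - secretRef:
        name: secret-env-demo
EOF
kubectl logs pod-secret-env-demo   # SECRET_DATA=value-1 once the pod has completed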
Mar 18 11:27:10.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:27:10.408: INFO: namespace: e2e-tests-secrets-h8mhg, resource: bindings, ignored listing per whitelist Mar 18 11:27:10.457: INFO: namespace e2e-tests-secrets-h8mhg deletion completed in 6.09415381s • [SLOW TEST:10.304 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:27:10.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Mar 18 11:27:10.602: INFO: Waiting up to 5m0s for pod "client-containers-686df858-690b-11ea-9856-0242ac11000f" in namespace "e2e-tests-containers-xkqfm" to be "success or failure" Mar 18 11:27:10.605: INFO: Pod "client-containers-686df858-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.296916ms Mar 18 11:27:12.625: INFO: Pod "client-containers-686df858-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022660407s Mar 18 11:27:14.628: INFO: Pod "client-containers-686df858-690b-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026473989s STEP: Saw pod success Mar 18 11:27:14.628: INFO: Pod "client-containers-686df858-690b-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:27:14.631: INFO: Trying to get logs from node hunter-worker pod client-containers-686df858-690b-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:27:14.661: INFO: Waiting for pod client-containers-686df858-690b-11ea-9856-0242ac11000f to disappear Mar 18 11:27:14.672: INFO: Pod client-containers-686df858-690b-11ea-9856-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:27:14.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-xkqfm" for this suite. 
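The point of this spec, that an empty command/args falls back to the image's own ENTRYPOINT/CMD, can be seen on any hand-made pod; the name and nginx image below are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine   # no command or args: the image defaults run
EOF
kubectl get pod client-containers-demo -o jsonpath='{.spec.containers[0].command}{" "}{.spec.containers[0].args}{"\n"}'   # both empty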
Mar 18 11:27:20.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:27:20.722: INFO: namespace: e2e-tests-containers-xkqfm, resource: bindings, ignored listing per whitelist Mar 18 11:27:20.763: INFO: namespace e2e-tests-containers-xkqfm deletion completed in 6.088467214s • [SLOW TEST:10.306 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:27:20.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-6e8d820c-690b-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 11:27:20.872: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e8f0e7e-690b-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-rbdn4" to be "success or failure" Mar 18 11:27:20.876: INFO: Pod "pod-projected-configmaps-6e8f0e7e-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37109ms Mar 18 11:27:22.880: INFO: Pod "pod-projected-configmaps-6e8f0e7e-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008283357s Mar 18 11:27:24.884: INFO: Pod "pod-projected-configmaps-6e8f0e7e-690b-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012559344s STEP: Saw pod success Mar 18 11:27:24.885: INFO: Pod "pod-projected-configmaps-6e8f0e7e-690b-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:27:24.887: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6e8f0e7e-690b-11ea-9856-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 18 11:27:24.902: INFO: Waiting for pod pod-projected-configmaps-6e8f0e7e-690b-11ea-9856-0242ac11000f to disappear Mar 18 11:27:24.906: INFO: Pod pod-projected-configmaps-6e8f0e7e-690b-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:27:24.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rbdn4" for this suite. 
Mar 18 11:27:30.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:27:31.008: INFO: namespace: e2e-tests-projected-rbdn4, resource: bindings, ignored listing per whitelist Mar 18 11:27:31.028: INFO: namespace e2e-tests-projected-rbdn4 deletion completed in 6.11947408s • [SLOW TEST:10.265 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:27:31.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:27:31.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74aab8bc-690b-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-8dm5c" to be "success or failure" Mar 18 11:27:31.134: INFO: Pod "downwardapi-volume-74aab8bc-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.599394ms Mar 18 11:27:33.139: INFO: Pod "downwardapi-volume-74aab8bc-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00897943s Mar 18 11:27:35.144: INFO: Pod "downwardapi-volume-74aab8bc-690b-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01392001s STEP: Saw pod success Mar 18 11:27:35.144: INFO: Pod "downwardapi-volume-74aab8bc-690b-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:27:35.148: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-74aab8bc-690b-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:27:35.171: INFO: Waiting for pod downwardapi-volume-74aab8bc-690b-11ea-9856-0242ac11000f to disappear Mar 18 11:27:35.175: INFO: Pod downwardapi-volume-74aab8bc-690b-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:27:35.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8dm5c" for this suite. 
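Exposing a container's memory request through a projected downward API volume, as this spec does, looks roughly like the sketch below; the 32Mi request, the 1Mi divisor and the names are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi
EOF
kubectl logs downwardapi-memory-demo   # prints 32 (the request expressed in Mi)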
Mar 18 11:27:41.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:27:41.204: INFO: namespace: e2e-tests-projected-8dm5c, resource: bindings, ignored listing per whitelist Mar 18 11:27:41.265: INFO: namespace e2e-tests-projected-8dm5c deletion completed in 6.087492918s • [SLOW TEST:10.236 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:27:41.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:27:41.421: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7ac9672c-690b-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00111344a), BlockOwnerDeletion:(*bool)(0xc00111344b)}} Mar 18 11:27:41.446: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7ac86cec-690b-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00245c772), BlockOwnerDeletion:(*bool)(0xc00245c773)}} Mar 18 11:27:41.451: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7ac8ee63-690b-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001113632), BlockOwnerDeletion:(*bool)(0xc001113633)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:27:46.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-4p2wl" for this suite. 
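The circular ownership this spec sets up can be imitated with ownerReferences patches; the sketch below is an approximation with illustrative pod names and a busybox sleep, not the test's client-go code.

# Three pods whose ownerReferences form a cycle (pod1 owned by pod3, pod2 by pod1,
# pod3 by pod2); deleting any one of them lets the garbage collector tear the whole
# cycle down instead of deadlocking.
for p in pod1 pod2 pod3; do
  kubectl run "$p" --image=busybox --restart=Never -- sleep 3600
done
uid() { kubectl get pod "$1" -o jsonpath='{.metadata.uid}'; }
own() {  # make pod $1 owned by pod $2
  kubectl patch pod "$1" --type=merge \
    -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"$2\",\"uid\":\"$(uid "$2")\"}]}}"
}
own pod1 pod3; own pod2 pod1; own pod3 pod2
kubectl delete pod pod1        # the cycle does not block deletion
kubectl get pods pod2 pod3     # eventually removed as well by the garbage collector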
Mar 18 11:27:52.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:27:52.567: INFO: namespace: e2e-tests-gc-4p2wl, resource: bindings, ignored listing per whitelist Mar 18 11:27:52.605: INFO: namespace e2e-tests-gc-4p2wl deletion completed in 6.097668093s • [SLOW TEST:11.339 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:27:52.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:27:52.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-818d27d8-690b-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-8b77s" to be "success or failure" Mar 18 11:27:52.739: INFO: Pod "downwardapi-volume-818d27d8-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936901ms Mar 18 11:27:54.743: INFO: Pod "downwardapi-volume-818d27d8-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007329037s Mar 18 11:27:56.752: INFO: Pod "downwardapi-volume-818d27d8-690b-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016496505s STEP: Saw pod success Mar 18 11:27:56.752: INFO: Pod "downwardapi-volume-818d27d8-690b-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:27:56.755: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-818d27d8-690b-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:27:56.823: INFO: Waiting for pod downwardapi-volume-818d27d8-690b-11ea-9856-0242ac11000f to disappear Mar 18 11:27:56.841: INFO: Pod downwardapi-volume-818d27d8-690b-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:27:56.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8b77s" for this suite. 
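The downward API volume test above relies on the defaulting rule that, when a container declares no CPU limit, limits.cpu reported through the downward API falls back to the node's allocatable CPU. A minimal sketch of such a pod spec is below, using a 1m divisor so the file holds millicores; all names and the busybox image are illustrative, not the ones the test generates.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The container sets no CPU limit, so the downward API file should
        // report the node's allocatable CPU (in millicores, given the divisor).
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                    Divisor:       resource.MustParse("1m"),
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }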
Mar 18 11:28:02.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:28:03.024: INFO: namespace: e2e-tests-downward-api-8b77s, resource: bindings, ignored listing per whitelist Mar 18 11:28:03.036: INFO: namespace e2e-tests-downward-api-8b77s deletion completed in 6.191713701s • [SLOW TEST:10.431 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:28:03.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 18 11:28:03.136: INFO: Waiting up to 5m0s for pod "downward-api-87bf5cfc-690b-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-hkhkz" to be "success or failure" Mar 18 11:28:03.140: INFO: Pod "downward-api-87bf5cfc-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.818995ms Mar 18 11:28:05.144: INFO: Pod "downward-api-87bf5cfc-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008322344s Mar 18 11:28:07.149: INFO: Pod "downward-api-87bf5cfc-690b-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012806254s STEP: Saw pod success Mar 18 11:28:07.149: INFO: Pod "downward-api-87bf5cfc-690b-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:28:07.152: INFO: Trying to get logs from node hunter-worker pod downward-api-87bf5cfc-690b-11ea-9856-0242ac11000f container dapi-container: STEP: delete the pod Mar 18 11:28:07.206: INFO: Waiting for pod downward-api-87bf5cfc-690b-11ea-9856-0242ac11000f to disappear Mar 18 11:28:07.209: INFO: Pod downward-api-87bf5cfc-690b-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:28:07.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hkhkz" for this suite. 
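The downward API env-var test above injects the node's IP into the container through status.hostIP. A minimal sketch of an equivalent pod spec follows; the pod name, container name, image, and command are illustrative.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // HOST_IP is filled in by the kubelet from the pod's status.hostIP,
        // which is the value the test above asserts on.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env | grep HOST_IP"},
                    Env: []corev1.EnvVar{{
                        Name: "HOST_IP",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                        },
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }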
Mar 18 11:28:13.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:28:13.243: INFO: namespace: e2e-tests-downward-api-hkhkz, resource: bindings, ignored listing per whitelist Mar 18 11:28:13.310: INFO: namespace e2e-tests-downward-api-hkhkz deletion completed in 6.096998359s • [SLOW TEST:10.274 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:28:13.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Mar 18 11:28:17.495: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-8de44182-690b-11ea-9856-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-pods-ps4ql", SelfLink:"/api/v1/namespaces/e2e-tests-pods-ps4ql/pods/pod-submit-remove-8de44182-690b-11ea-9856-0242ac11000f", UID:"8de55535-690b-11ea-99e8-0242ac110002", ResourceVersion:"491012", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720127693, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"432469583"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pfbv5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000529780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pfbv5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002476738), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002053560), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002476c70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002476c90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002476c98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002476c9c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720127693, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720127695, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720127695, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720127693, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.57", StartTime:(*v1.Time)(0xc001a37880), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001a378a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://67f20a404783dad0243a199935b7f4a9860ac34dd2f8541d30822dff11fcbc2c"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 18 11:28:22.508: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:28:22.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-ps4ql" for this suite. Mar 18 11:28:28.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:28:28.560: INFO: namespace: e2e-tests-pods-ps4ql, resource: bindings, ignored listing per whitelist Mar 18 11:28:28.606: INFO: namespace e2e-tests-pods-ps4ql deletion completed in 6.091003056s • [SLOW TEST:15.296 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:28:28.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:28:28.712: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:28:32.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-frcdd" for this suite. 
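The test above fetches container logs over a websocket connection to the API server. The sketch below takes the more common route to the same data, client-go's streaming GetLogs call, rather than the websocket path the test exercises; it assumes a recent client-go (v0.18+) signature for Stream, and the namespace and pod name are placeholders, not the generated names from this run.

    package main

    import (
        "context"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Stream the logs of a single-container pod to stdout.
        req := cs.CoreV1().Pods("default").GetLogs("pod-logs-example", &corev1.PodLogOptions{})
        stream, err := req.Stream(context.TODO())
        if err != nil {
            panic(err)
        }
        defer stream.Close()
        if _, err := io.Copy(os.Stdout, stream); err != nil {
            panic(err)
        }
    }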
Mar 18 11:29:10.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:29:10.822: INFO: namespace: e2e-tests-pods-frcdd, resource: bindings, ignored listing per whitelist Mar 18 11:29:10.889: INFO: namespace e2e-tests-pods-frcdd deletion completed in 38.125427547s • [SLOW TEST:42.282 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:29:10.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4dfp9 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 11:29:10.964: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 11:29:39.104: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.238:8080/dial?request=hostName&protocol=udp&host=10.244.1.58&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-4dfp9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 11:29:39.104: INFO: >>> kubeConfig: /root/.kube/config I0318 11:29:39.140537 6 log.go:172] (0xc0017d84d0) (0xc0008d4aa0) Create stream I0318 11:29:39.140577 6 log.go:172] (0xc0017d84d0) (0xc0008d4aa0) Stream added, broadcasting: 1 I0318 11:29:39.153891 6 log.go:172] (0xc0017d84d0) Reply frame received for 1 I0318 11:29:39.153939 6 log.go:172] (0xc0017d84d0) (0xc0025a61e0) Create stream I0318 11:29:39.153953 6 log.go:172] (0xc0017d84d0) (0xc0025a61e0) Stream added, broadcasting: 3 I0318 11:29:39.155013 6 log.go:172] (0xc0017d84d0) Reply frame received for 3 I0318 11:29:39.155056 6 log.go:172] (0xc0017d84d0) (0xc0008d4be0) Create stream I0318 11:29:39.155067 6 log.go:172] (0xc0017d84d0) (0xc0008d4be0) Stream added, broadcasting: 5 I0318 11:29:39.155786 6 log.go:172] (0xc0017d84d0) Reply frame received for 5 I0318 11:29:39.249057 6 log.go:172] (0xc0017d84d0) Data frame received for 3 I0318 11:29:39.249083 6 log.go:172] (0xc0025a61e0) (3) Data frame handling I0318 11:29:39.249103 6 log.go:172] (0xc0025a61e0) (3) Data frame sent I0318 11:29:39.250022 6 log.go:172] (0xc0017d84d0) Data frame received for 3 I0318 11:29:39.250064 6 log.go:172] (0xc0025a61e0) (3) Data frame handling I0318 11:29:39.250093 6 log.go:172] (0xc0017d84d0) Data frame received for 5 I0318 11:29:39.250106 6 log.go:172] (0xc0008d4be0) (5) Data frame handling I0318 11:29:39.251646 6 log.go:172] (0xc0017d84d0) Data frame 
received for 1 I0318 11:29:39.251687 6 log.go:172] (0xc0008d4aa0) (1) Data frame handling I0318 11:29:39.251724 6 log.go:172] (0xc0008d4aa0) (1) Data frame sent I0318 11:29:39.251761 6 log.go:172] (0xc0017d84d0) (0xc0008d4aa0) Stream removed, broadcasting: 1 I0318 11:29:39.251850 6 log.go:172] (0xc0017d84d0) (0xc0008d4aa0) Stream removed, broadcasting: 1 I0318 11:29:39.251869 6 log.go:172] (0xc0017d84d0) (0xc0025a61e0) Stream removed, broadcasting: 3 I0318 11:29:39.251957 6 log.go:172] (0xc0017d84d0) Go away received I0318 11:29:39.252019 6 log.go:172] (0xc0017d84d0) (0xc0008d4be0) Stream removed, broadcasting: 5 Mar 18 11:29:39.252: INFO: Waiting for endpoints: map[] Mar 18 11:29:39.255: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.238:8080/dial?request=hostName&protocol=udp&host=10.244.2.237&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-4dfp9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 11:29:39.255: INFO: >>> kubeConfig: /root/.kube/config I0318 11:29:39.285551 6 log.go:172] (0xc00079f290) (0xc0025a6640) Create stream I0318 11:29:39.285661 6 log.go:172] (0xc00079f290) (0xc0025a6640) Stream added, broadcasting: 1 I0318 11:29:39.288453 6 log.go:172] (0xc00079f290) Reply frame received for 1 I0318 11:29:39.288520 6 log.go:172] (0xc00079f290) (0xc001f743c0) Create stream I0318 11:29:39.288544 6 log.go:172] (0xc00079f290) (0xc001f743c0) Stream added, broadcasting: 3 I0318 11:29:39.289700 6 log.go:172] (0xc00079f290) Reply frame received for 3 I0318 11:29:39.289786 6 log.go:172] (0xc00079f290) (0xc0008d4c80) Create stream I0318 11:29:39.289815 6 log.go:172] (0xc00079f290) (0xc0008d4c80) Stream added, broadcasting: 5 I0318 11:29:39.290718 6 log.go:172] (0xc00079f290) Reply frame received for 5 I0318 11:29:39.360576 6 log.go:172] (0xc00079f290) Data frame received for 3 I0318 11:29:39.360605 6 log.go:172] (0xc001f743c0) (3) Data frame handling I0318 11:29:39.360620 6 log.go:172] (0xc001f743c0) (3) Data frame sent I0318 11:29:39.361391 6 log.go:172] (0xc00079f290) Data frame received for 3 I0318 11:29:39.361406 6 log.go:172] (0xc001f743c0) (3) Data frame handling I0318 11:29:39.361546 6 log.go:172] (0xc00079f290) Data frame received for 5 I0318 11:29:39.361618 6 log.go:172] (0xc0008d4c80) (5) Data frame handling I0318 11:29:39.363292 6 log.go:172] (0xc00079f290) Data frame received for 1 I0318 11:29:39.363312 6 log.go:172] (0xc0025a6640) (1) Data frame handling I0318 11:29:39.363329 6 log.go:172] (0xc0025a6640) (1) Data frame sent I0318 11:29:39.363342 6 log.go:172] (0xc00079f290) (0xc0025a6640) Stream removed, broadcasting: 1 I0318 11:29:39.363480 6 log.go:172] (0xc00079f290) (0xc0025a6640) Stream removed, broadcasting: 1 I0318 11:29:39.363494 6 log.go:172] (0xc00079f290) (0xc001f743c0) Stream removed, broadcasting: 3 I0318 11:29:39.363525 6 log.go:172] (0xc00079f290) (0xc0008d4c80) Stream removed, broadcasting: 5 I0318 11:29:39.363576 6 log.go:172] (0xc00079f290) Go away received Mar 18 11:29:39.363: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:29:39.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-4dfp9" for this suite. 
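The intra-pod UDP check above works by exec'ing curl inside a host-network test pod against the test container's /dial endpoint, which in turn sends a UDP probe to the target pod and reports the hostname it heard back. A bare-bones Go version of the same HTTP probe is sketched below; it assumes it runs somewhere the pod IPs are routable (for example, from another pod in the cluster), and the IPs and ports are the ones from this particular run, so they will differ on any other cluster.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
    )

    func main() {
        // Ask the test-container pod (listening on :8080) to send a UDP
        // probe to the target pod's hostName handler on :8081, mirroring
        // the curl commands shown in the log above.
        q := url.Values{}
        q.Set("request", "hostName")
        q.Set("protocol", "udp")
        q.Set("host", "10.244.1.58")
        q.Set("port", "8081")
        q.Set("tries", "1")

        resp, err := http.Get("http://10.244.2.238:8080/dial?" + q.Encode())
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        // The handler replies with JSON describing the hostname(s) it
        // heard back from over UDP.
        fmt.Println(string(body))
    }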
Mar 18 11:30:03.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:30:03.463: INFO: namespace: e2e-tests-pod-network-test-4dfp9, resource: bindings, ignored listing per whitelist Mar 18 11:30:03.491: INFO: namespace e2e-tests-pod-network-test-4dfp9 deletion completed in 24.123722144s • [SLOW TEST:52.602 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:30:03.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 18 11:30:03.666: INFO: Waiting up to 5m0s for pod "pod-cf95ccbc-690b-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-nf592" to be "success or failure" Mar 18 11:30:03.717: INFO: Pod "pod-cf95ccbc-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 50.796394ms Mar 18 11:30:05.720: INFO: Pod "pod-cf95ccbc-690b-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054159286s Mar 18 11:30:07.724: INFO: Pod "pod-cf95ccbc-690b-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057778488s STEP: Saw pod success Mar 18 11:30:07.724: INFO: Pod "pod-cf95ccbc-690b-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:30:07.727: INFO: Trying to get logs from node hunter-worker pod pod-cf95ccbc-690b-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:30:07.743: INFO: Waiting for pod pod-cf95ccbc-690b-11ea-9856-0242ac11000f to disappear Mar 18 11:30:07.747: INFO: Pod pod-cf95ccbc-690b-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:30:07.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nf592" for this suite. 
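The emptyDir test above mounts a tmpfs-backed volume (medium: Memory) and verifies a file created on it with mode 0666. A minimal sketch of a comparable pod spec is below; the busybox command only approximates what the e2e mounttest image reports, and all names are illustrative.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // An emptyDir backed by tmpfs; the container writes a file, sets it
        // to 0666, and prints its mode plus the mount type.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-0666-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    Command: []string{"sh", "-c",
                        "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }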
Mar 18 11:30:13.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:30:13.800: INFO: namespace: e2e-tests-emptydir-nf592, resource: bindings, ignored listing per whitelist Mar 18 11:30:13.857: INFO: namespace e2e-tests-emptydir-nf592 deletion completed in 6.107018886s • [SLOW TEST:10.365 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:30:13.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:30:13.973: INFO: Creating ReplicaSet my-hostname-basic-d5bd5617-690b-11ea-9856-0242ac11000f Mar 18 11:30:14.014: INFO: Pod name my-hostname-basic-d5bd5617-690b-11ea-9856-0242ac11000f: Found 0 pods out of 1 Mar 18 11:30:19.017: INFO: Pod name my-hostname-basic-d5bd5617-690b-11ea-9856-0242ac11000f: Found 1 pods out of 1 Mar 18 11:30:19.017: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d5bd5617-690b-11ea-9856-0242ac11000f" is running Mar 18 11:30:19.020: INFO: Pod "my-hostname-basic-d5bd5617-690b-11ea-9856-0242ac11000f-nf6qs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 11:30:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 11:30:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 11:30:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 11:30:14 +0000 UTC Reason: Message:}]) Mar 18 11:30:19.020: INFO: Trying to dial the pod Mar 18 11:30:24.031: INFO: Controller my-hostname-basic-d5bd5617-690b-11ea-9856-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-d5bd5617-690b-11ea-9856-0242ac11000f-nf6qs]: "my-hostname-basic-d5bd5617-690b-11ea-9856-0242ac11000f-nf6qs", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:30:24.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-hmdt9" for this suite. 
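The ReplicaSet test above creates a single-replica ReplicaSet from a public image, waits for its pod to run, then dials the replica and expects it to answer with its own pod name. A minimal sketch of a comparable ReplicaSet object follows; the nginx image (seen elsewhere in this log) stands in for the hostname-serving test image, and the names and labels are illustrative.

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(1)
        labels := map[string]string{"name": "my-hostname-basic"}

        // One replica, selected by the pod-template labels; the conformance
        // test then checks each replica serves its own hostname.
        rs := appsv1.ReplicaSet{
            ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
            Spec: appsv1.ReplicaSetSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "my-hostname-basic",
                        Image: "docker.io/library/nginx:1.14-alpine",
                        Ports: []corev1.ContainerPort{{ContainerPort: 80}},
                    }}},
                },
            },
        }
        out, _ := json.MarshalIndent(rs, "", "  ")
        fmt.Println(string(out))
    }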
Mar 18 11:30:30.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:30:30.080: INFO: namespace: e2e-tests-replicaset-hmdt9, resource: bindings, ignored listing per whitelist Mar 18 11:30:30.130: INFO: namespace e2e-tests-replicaset-hmdt9 deletion completed in 6.095360188s • [SLOW TEST:16.273 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:30:30.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 18 11:30:30.264: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ctlrs,SelfLink:/api/v1/namespaces/e2e-tests-watch-ctlrs/configmaps/e2e-watch-test-label-changed,UID:df6f7614-690b-11ea-99e8-0242ac110002,ResourceVersion:491436,Generation:0,CreationTimestamp:2020-03-18 11:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 11:30:30.264: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ctlrs,SelfLink:/api/v1/namespaces/e2e-tests-watch-ctlrs/configmaps/e2e-watch-test-label-changed,UID:df6f7614-690b-11ea-99e8-0242ac110002,ResourceVersion:491437,Generation:0,CreationTimestamp:2020-03-18 11:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 18 11:30:30.265: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ctlrs,SelfLink:/api/v1/namespaces/e2e-tests-watch-ctlrs/configmaps/e2e-watch-test-label-changed,UID:df6f7614-690b-11ea-99e8-0242ac110002,ResourceVersion:491438,Generation:0,CreationTimestamp:2020-03-18 11:30:30 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 18 11:30:40.295: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ctlrs,SelfLink:/api/v1/namespaces/e2e-tests-watch-ctlrs/configmaps/e2e-watch-test-label-changed,UID:df6f7614-690b-11ea-99e8-0242ac110002,ResourceVersion:491459,Generation:0,CreationTimestamp:2020-03-18 11:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 11:30:40.295: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ctlrs,SelfLink:/api/v1/namespaces/e2e-tests-watch-ctlrs/configmaps/e2e-watch-test-label-changed,UID:df6f7614-690b-11ea-99e8-0242ac110002,ResourceVersion:491460,Generation:0,CreationTimestamp:2020-03-18 11:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 18 11:30:40.296: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ctlrs,SelfLink:/api/v1/namespaces/e2e-tests-watch-ctlrs/configmaps/e2e-watch-test-label-changed,UID:df6f7614-690b-11ea-99e8-0242ac110002,ResourceVersion:491461,Generation:0,CreationTimestamp:2020-03-18 11:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:30:40.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-ctlrs" for this suite. 
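The watch test above demonstrates that a label-selector watch reports an object as DELETED when its label stops matching the selector and as ADDED again when the label is restored. A small client-go sketch of such a watch is below; it assumes client-go v0.18+ signatures, and the namespace and label value are placeholders.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Watch only configmaps carrying a particular label value; changing
        // the label away from it surfaces as a DELETED event here, and
        // restoring it surfaces as ADDED.
        w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector: "watch-this-configmap=label-changed-and-restored",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
        }
    }

The same ListOptions struct also accepts a ResourceVersion, which is what the later "restart watching from the last resource version" test in this log exercises.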
Mar 18 11:30:46.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:30:46.380: INFO: namespace: e2e-tests-watch-ctlrs, resource: bindings, ignored listing per whitelist Mar 18 11:30:46.389: INFO: namespace e2e-tests-watch-ctlrs deletion completed in 6.088843163s • [SLOW TEST:16.258 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:30:46.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:30:50.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-5qng4" for this suite. 
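The kubelet test above schedules a busybox pod with hostAliases and verifies the kubelet wrote the entries into the pod's /etc/hosts. A minimal sketch of a comparable pod spec follows; the IP, hostnames, and other names are illustrative.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // hostAliases entries the kubelet should add to the pod's /etc/hosts;
        // the container simply prints the file so the entries can be checked.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                HostAliases: []corev1.HostAlias{{
                    IP:        "123.45.67.89",
                    Hostnames: []string{"foo.local", "bar.local"},
                }},
                Containers: []corev1.Container{{
                    Name:    "busybox-host-aliases",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/hosts"},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }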
Mar 18 11:31:28.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:31:28.610: INFO: namespace: e2e-tests-kubelet-test-5qng4, resource: bindings, ignored listing per whitelist Mar 18 11:31:28.656: INFO: namespace e2e-tests-kubelet-test-5qng4 deletion completed in 38.112031208s • [SLOW TEST:42.267 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:31:28.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 18 11:31:28.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:29.001: INFO: stderr: "" Mar 18 11:31:29.001: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 11:31:29.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:29.126: INFO: stderr: "" Mar 18 11:31:29.126: INFO: stdout: "update-demo-nautilus-28whx update-demo-nautilus-7z98g " Mar 18 11:31:29.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-28whx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:29.268: INFO: stderr: "" Mar 18 11:31:29.268: INFO: stdout: "" Mar 18 11:31:29.268: INFO: update-demo-nautilus-28whx is created but not running Mar 18 11:31:34.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:34.364: INFO: stderr: "" Mar 18 11:31:34.364: INFO: stdout: "update-demo-nautilus-28whx update-demo-nautilus-7z98g " Mar 18 11:31:34.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-28whx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:34.450: INFO: stderr: "" Mar 18 11:31:34.450: INFO: stdout: "true" Mar 18 11:31:34.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-28whx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:34.552: INFO: stderr: "" Mar 18 11:31:34.552: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 11:31:34.552: INFO: validating pod update-demo-nautilus-28whx Mar 18 11:31:34.557: INFO: got data: { "image": "nautilus.jpg" } Mar 18 11:31:34.557: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 11:31:34.557: INFO: update-demo-nautilus-28whx is verified up and running Mar 18 11:31:34.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z98g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:34.664: INFO: stderr: "" Mar 18 11:31:34.664: INFO: stdout: "true" Mar 18 11:31:34.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z98g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:34.783: INFO: stderr: "" Mar 18 11:31:34.783: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 11:31:34.783: INFO: validating pod update-demo-nautilus-7z98g Mar 18 11:31:34.787: INFO: got data: { "image": "nautilus.jpg" } Mar 18 11:31:34.787: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 11:31:34.787: INFO: update-demo-nautilus-7z98g is verified up and running STEP: scaling down the replication controller Mar 18 11:31:34.789: INFO: scanned /root for discovery docs: Mar 18 11:31:34.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:35.929: INFO: stderr: "" Mar 18 11:31:35.929: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 18 11:31:35.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:36.040: INFO: stderr: "" Mar 18 11:31:36.040: INFO: stdout: "update-demo-nautilus-28whx update-demo-nautilus-7z98g " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 18 11:31:41.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:41.140: INFO: stderr: "" Mar 18 11:31:41.140: INFO: stdout: "update-demo-nautilus-28whx update-demo-nautilus-7z98g " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 18 11:31:46.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:46.233: INFO: stderr: "" Mar 18 11:31:46.233: INFO: stdout: "update-demo-nautilus-7z98g " Mar 18 11:31:46.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z98g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:46.337: INFO: stderr: "" Mar 18 11:31:46.337: INFO: stdout: "true" Mar 18 11:31:46.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z98g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:46.434: INFO: stderr: "" Mar 18 11:31:46.434: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 11:31:46.434: INFO: validating pod update-demo-nautilus-7z98g Mar 18 11:31:46.437: INFO: got data: { "image": "nautilus.jpg" } Mar 18 11:31:46.437: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 11:31:46.437: INFO: update-demo-nautilus-7z98g is verified up and running STEP: scaling up the replication controller Mar 18 11:31:46.439: INFO: scanned /root for discovery docs: Mar 18 11:31:46.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:47.562: INFO: stderr: "" Mar 18 11:31:47.562: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 11:31:47.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:47.682: INFO: stderr: "" Mar 18 11:31:47.682: INFO: stdout: "update-demo-nautilus-7z98g update-demo-nautilus-8gqgj " Mar 18 11:31:47.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z98g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:47.820: INFO: stderr: "" Mar 18 11:31:47.820: INFO: stdout: "true" Mar 18 11:31:47.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z98g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:47.916: INFO: stderr: "" Mar 18 11:31:47.916: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 11:31:47.916: INFO: validating pod update-demo-nautilus-7z98g Mar 18 11:31:47.920: INFO: got data: { "image": "nautilus.jpg" } Mar 18 11:31:47.920: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 11:31:47.920: INFO: update-demo-nautilus-7z98g is verified up and running Mar 18 11:31:47.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8gqgj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:48.015: INFO: stderr: "" Mar 18 11:31:48.015: INFO: stdout: "" Mar 18 11:31:48.015: INFO: update-demo-nautilus-8gqgj is created but not running Mar 18 11:31:53.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:55.173: INFO: stderr: "" Mar 18 11:31:55.173: INFO: stdout: "update-demo-nautilus-7z98g update-demo-nautilus-8gqgj " Mar 18 11:31:55.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z98g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:55.268: INFO: stderr: "" Mar 18 11:31:55.268: INFO: stdout: "true" Mar 18 11:31:55.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7z98g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:55.370: INFO: stderr: "" Mar 18 11:31:55.370: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 11:31:55.370: INFO: validating pod update-demo-nautilus-7z98g Mar 18 11:31:55.373: INFO: got data: { "image": "nautilus.jpg" } Mar 18 11:31:55.373: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 11:31:55.373: INFO: update-demo-nautilus-7z98g is verified up and running Mar 18 11:31:55.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8gqgj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:55.464: INFO: stderr: "" Mar 18 11:31:55.464: INFO: stdout: "true" Mar 18 11:31:55.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8gqgj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:55.554: INFO: stderr: "" Mar 18 11:31:55.554: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 11:31:55.554: INFO: validating pod update-demo-nautilus-8gqgj Mar 18 11:31:55.558: INFO: got data: { "image": "nautilus.jpg" } Mar 18 11:31:55.558: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 11:31:55.558: INFO: update-demo-nautilus-8gqgj is verified up and running STEP: using delete to clean up resources Mar 18 11:31:55.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:55.682: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 11:31:55.682: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 18 11:31:55.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-n4w5h' Mar 18 11:31:55.894: INFO: stderr: "No resources found.\n" Mar 18 11:31:55.894: INFO: stdout: "" Mar 18 11:31:55.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-n4w5h -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 11:31:55.996: INFO: stderr: "" Mar 18 11:31:55.996: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:31:55.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-n4w5h" for this suite. 
Mar 18 11:32:18.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:32:18.195: INFO: namespace: e2e-tests-kubectl-n4w5h, resource: bindings, ignored listing per whitelist Mar 18 11:32:18.218: INFO: namespace e2e-tests-kubectl-n4w5h deletion completed in 22.143493027s • [SLOW TEST:49.562 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:32:18.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-1fd81bd5-690c-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:32:18.328: INFO: Waiting up to 5m0s for pod "pod-secrets-1fda18ad-690c-11ea-9856-0242ac11000f" in namespace "e2e-tests-secrets-8b9bv" to be "success or failure" Mar 18 11:32:18.330: INFO: Pod "pod-secrets-1fda18ad-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.793838ms Mar 18 11:32:20.335: INFO: Pod "pod-secrets-1fda18ad-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007149657s Mar 18 11:32:22.339: INFO: Pod "pod-secrets-1fda18ad-690c-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011647404s STEP: Saw pod success Mar 18 11:32:22.339: INFO: Pod "pod-secrets-1fda18ad-690c-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:32:22.342: INFO: Trying to get logs from node hunter-worker pod pod-secrets-1fda18ad-690c-11ea-9856-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 18 11:32:22.376: INFO: Waiting for pod pod-secrets-1fda18ad-690c-11ea-9856-0242ac11000f to disappear Mar 18 11:32:22.391: INFO: Pod pod-secrets-1fda18ad-690c-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:32:22.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8b9bv" for this suite. 
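The secrets test above mounts a secret as a volume with an item mapping, so a data key appears under a remapped file path (with an optional mode) instead of its key name. A minimal sketch of such a pod spec follows; the secret name, key, paths, and mode are illustrative rather than the generated values from this run.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        mode := int32(0400)

        // Mount a secret as a read-only volume, remapping the key "data-1"
        // to the file new-path-data-1 with an explicit file mode.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-mapping-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName: "secret-test-map",
                            Items: []corev1.KeyToPath{{
                                Key: "data-1", Path: "new-path-data-1", Mode: &mode,
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }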
Mar 18 11:32:28.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:32:28.447: INFO: namespace: e2e-tests-secrets-8b9bv, resource: bindings, ignored listing per whitelist Mar 18 11:32:28.496: INFO: namespace e2e-tests-secrets-8b9bv deletion completed in 6.088721359s • [SLOW TEST:10.278 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:32:28.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-25fa3bd9-690c-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:32:28.615: INFO: Waiting up to 5m0s for pod "pod-secrets-25fcdf66-690c-11ea-9856-0242ac11000f" in namespace "e2e-tests-secrets-lpqsz" to be "success or failure" Mar 18 11:32:28.619: INFO: Pod "pod-secrets-25fcdf66-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119598ms Mar 18 11:32:30.623: INFO: Pod "pod-secrets-25fcdf66-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008292776s Mar 18 11:32:32.627: INFO: Pod "pod-secrets-25fcdf66-690c-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012545979s STEP: Saw pod success Mar 18 11:32:32.627: INFO: Pod "pod-secrets-25fcdf66-690c-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:32:32.631: INFO: Trying to get logs from node hunter-worker pod pod-secrets-25fcdf66-690c-11ea-9856-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 18 11:32:32.666: INFO: Waiting for pod pod-secrets-25fcdf66-690c-11ea-9856-0242ac11000f to disappear Mar 18 11:32:32.679: INFO: Pod pod-secrets-25fcdf66-690c-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:32:32.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lpqsz" for this suite. 
Mar 18 11:32:38.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:32:38.746: INFO: namespace: e2e-tests-secrets-lpqsz, resource: bindings, ignored listing per whitelist Mar 18 11:32:38.776: INFO: namespace e2e-tests-secrets-lpqsz deletion completed in 6.093847544s • [SLOW TEST:10.279 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:32:38.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 18 11:32:38.915: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-k4wb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-k4wb7/configmaps/e2e-watch-test-watch-closed,UID:2c1ec25d-690c-11ea-99e8-0242ac110002,ResourceVersion:491852,Generation:0,CreationTimestamp:2020-03-18 11:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 11:32:38.915: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-k4wb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-k4wb7/configmaps/e2e-watch-test-watch-closed,UID:2c1ec25d-690c-11ea-99e8-0242ac110002,ResourceVersion:491853,Generation:0,CreationTimestamp:2020-03-18 11:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 18 11:32:38.926: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-k4wb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-k4wb7/configmaps/e2e-watch-test-watch-closed,UID:2c1ec25d-690c-11ea-99e8-0242ac110002,ResourceVersion:491854,Generation:0,CreationTimestamp:2020-03-18 11:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 11:32:38.926: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-k4wb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-k4wb7/configmaps/e2e-watch-test-watch-closed,UID:2c1ec25d-690c-11ea-99e8-0242ac110002,ResourceVersion:491855,Generation:0,CreationTimestamp:2020-03-18 11:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:32:38.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-k4wb7" for this suite. Mar 18 11:32:44.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:32:44.983: INFO: namespace: e2e-tests-watch-k4wb7, resource: bindings, ignored listing per whitelist Mar 18 11:32:45.038: INFO: namespace e2e-tests-watch-k4wb7 deletion completed in 6.107649088s • [SLOW TEST:6.262 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:32:45.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 18 11:32:45.182: INFO: Waiting up to 5m0s for pod "pod-2fd4370c-690c-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-lj2r9" to be "success or failure" Mar 18 11:32:45.193: INFO: Pod "pod-2fd4370c-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.892834ms Mar 18 11:32:47.197: INFO: Pod "pod-2fd4370c-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014821797s Mar 18 11:32:49.201: INFO: Pod "pod-2fd4370c-690c-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018078892s STEP: Saw pod success Mar 18 11:32:49.201: INFO: Pod "pod-2fd4370c-690c-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:32:49.216: INFO: Trying to get logs from node hunter-worker pod pod-2fd4370c-690c-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:32:49.242: INFO: Waiting for pod pod-2fd4370c-690c-11ea-9856-0242ac11000f to disappear Mar 18 11:32:49.247: INFO: Pod pod-2fd4370c-690c-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:32:49.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lj2r9" for this suite. Mar 18 11:32:55.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:32:55.276: INFO: namespace: e2e-tests-emptydir-lj2r9, resource: bindings, ignored listing per whitelist Mar 18 11:32:55.337: INFO: namespace e2e-tests-emptydir-lj2r9 deletion completed in 6.087273672s • [SLOW TEST:10.299 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:32:55.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Mar 18 11:32:55.963: INFO: Waiting up to 5m0s for pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-snfj2" in namespace "e2e-tests-svcaccounts-b4v7v" to be "success or failure" Mar 18 11:32:55.980: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-snfj2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.473796ms Mar 18 11:32:57.984: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-snfj2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020712522s Mar 18 11:32:59.988: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-snfj2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024597543s Mar 18 11:33:02.000: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-snfj2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.036908855s STEP: Saw pod success Mar 18 11:33:02.000: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-snfj2" satisfied condition "success or failure" Mar 18 11:33:02.003: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-snfj2 container token-test: STEP: delete the pod Mar 18 11:33:02.034: INFO: Waiting for pod pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-snfj2 to disappear Mar 18 11:33:02.039: INFO: Pod pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-snfj2 no longer exists STEP: Creating a pod to test consume service account root CA Mar 18 11:33:02.042: INFO: Waiting up to 5m0s for pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-s6hdz" in namespace "e2e-tests-svcaccounts-b4v7v" to be "success or failure" Mar 18 11:33:02.045: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-s6hdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.582989ms Mar 18 11:33:04.049: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-s6hdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006862234s Mar 18 11:33:06.053: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-s6hdz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010693443s Mar 18 11:33:08.057: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-s6hdz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014717988s STEP: Saw pod success Mar 18 11:33:08.057: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-s6hdz" satisfied condition "success or failure" Mar 18 11:33:08.059: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-s6hdz container root-ca-test: STEP: delete the pod Mar 18 11:33:08.092: INFO: Waiting for pod pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-s6hdz to disappear Mar 18 11:33:08.098: INFO: Pod pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-s6hdz no longer exists STEP: Creating a pod to test consume service account namespace Mar 18 11:33:08.101: INFO: Waiting up to 5m0s for pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-lp8hh" in namespace "e2e-tests-svcaccounts-b4v7v" to be "success or failure" Mar 18 11:33:08.132: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-lp8hh": Phase="Pending", Reason="", readiness=false. Elapsed: 30.816772ms Mar 18 11:33:10.136: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-lp8hh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034941166s Mar 18 11:33:12.140: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-lp8hh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038642917s Mar 18 11:33:14.144: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-lp8hh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.043226504s STEP: Saw pod success Mar 18 11:33:14.144: INFO: Pod "pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-lp8hh" satisfied condition "success or failure" Mar 18 11:33:14.147: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-lp8hh container namespace-test: STEP: delete the pod Mar 18 11:33:14.230: INFO: Waiting for pod pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-lp8hh to disappear Mar 18 11:33:14.242: INFO: Pod pod-service-account-364a0b25-690c-11ea-9856-0242ac11000f-lp8hh no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:33:14.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-b4v7v" for this suite. Mar 18 11:33:20.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:33:20.355: INFO: namespace: e2e-tests-svcaccounts-b4v7v, resource: bindings, ignored listing per whitelist Mar 18 11:33:20.396: INFO: namespace e2e-tests-svcaccounts-b4v7v deletion completed in 6.150327624s • [SLOW TEST:25.059 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:33:20.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-hng72 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hng72 to expose endpoints map[] Mar 18 11:33:20.567: INFO: Get endpoints failed (2.958438ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 18 11:33:21.571: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hng72 exposes endpoints map[] (1.006872946s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-hng72 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hng72 to expose endpoints map[pod1:[100]] Mar 18 11:33:24.609: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hng72 exposes endpoints map[pod1:[100]] (3.031363421s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-hng72 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hng72 to expose endpoints map[pod1:[100] pod2:[101]] Mar 18 11:33:27.691: INFO: successfully validated that 
service multi-endpoint-test in namespace e2e-tests-services-hng72 exposes endpoints map[pod1:[100] pod2:[101]] (3.078867678s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-hng72 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hng72 to expose endpoints map[pod2:[101]] Mar 18 11:33:27.739: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hng72 exposes endpoints map[pod2:[101]] (41.284836ms elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-hng72 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hng72 to expose endpoints map[] Mar 18 11:33:28.760: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hng72 exposes endpoints map[] (1.016647765s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:33:28.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-hng72" for this suite. Mar 18 11:33:50.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:33:50.874: INFO: namespace: e2e-tests-services-hng72, resource: bindings, ignored listing per whitelist Mar 18 11:33:50.923: INFO: namespace e2e-tests-services-hng72 deletion completed in 22.086746382s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:30.527 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:33:50.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Mar 18 11:33:51.057: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:33:51.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4qpvl" for this suite. 
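The proxy test that closes this block simply launches `kubectl proxy -p 0 --disable-filter` and curls /api/ on whatever port the proxy picked. A rough standalone equivalent in Go using os/exec is sketched below; parsing the listen address out of a "Starting to serve on ..." startup line is an assumption about kubectl's output format, so treat the port-extraction step as illustrative only.

package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Start the proxy on a random port, as the test above does with "-p 0".
	cmd := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// kubectl proxy announces its listen address on the first stdout line,
	// e.g. "Starting to serve on 127.0.0.1:34567" (format may vary by version).
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	addr := strings.TrimSpace(line[strings.LastIndex(line, " ")+1:])

	// Equivalent of the "curling proxy /api/ output" step.
	resp, err := http.Get("http://" + addr + "/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}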
Mar 18 11:33:57.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:33:57.221: INFO: namespace: e2e-tests-kubectl-4qpvl, resource: bindings, ignored listing per whitelist Mar 18 11:33:57.300: INFO: namespace e2e-tests-kubectl-4qpvl deletion completed in 6.15222301s • [SLOW TEST:6.376 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:33:57.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 18 11:33:57.400: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-a,UID:5ae81454-690c-11ea-99e8-0242ac110002,ResourceVersion:492191,Generation:0,CreationTimestamp:2020-03-18 11:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 11:33:57.400: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-a,UID:5ae81454-690c-11ea-99e8-0242ac110002,ResourceVersion:492191,Generation:0,CreationTimestamp:2020-03-18 11:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 18 11:34:07.409: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-a,UID:5ae81454-690c-11ea-99e8-0242ac110002,ResourceVersion:492211,Generation:0,CreationTimestamp:2020-03-18 11:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 18 11:34:07.409: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-a,UID:5ae81454-690c-11ea-99e8-0242ac110002,ResourceVersion:492211,Generation:0,CreationTimestamp:2020-03-18 11:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 18 11:34:17.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-a,UID:5ae81454-690c-11ea-99e8-0242ac110002,ResourceVersion:492230,Generation:0,CreationTimestamp:2020-03-18 11:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 11:34:17.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-a,UID:5ae81454-690c-11ea-99e8-0242ac110002,ResourceVersion:492230,Generation:0,CreationTimestamp:2020-03-18 11:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 18 11:34:27.423: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-a,UID:5ae81454-690c-11ea-99e8-0242ac110002,ResourceVersion:492250,Generation:0,CreationTimestamp:2020-03-18 11:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Mar 18 11:34:27.424: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-a,UID:5ae81454-690c-11ea-99e8-0242ac110002,ResourceVersion:492250,Generation:0,CreationTimestamp:2020-03-18 11:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 18 11:34:37.430: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-b,UID:72c500b4-690c-11ea-99e8-0242ac110002,ResourceVersion:492270,Generation:0,CreationTimestamp:2020-03-18 11:34:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 11:34:37.430: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-b,UID:72c500b4-690c-11ea-99e8-0242ac110002,ResourceVersion:492270,Generation:0,CreationTimestamp:2020-03-18 11:34:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 18 11:34:47.440: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-b,UID:72c500b4-690c-11ea-99e8-0242ac110002,ResourceVersion:492290,Generation:0,CreationTimestamp:2020-03-18 11:34:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 11:34:47.440: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d6bgh,SelfLink:/api/v1/namespaces/e2e-tests-watch-d6bgh/configmaps/e2e-watch-test-configmap-b,UID:72c500b4-690c-11ea-99e8-0242ac110002,ResourceVersion:492290,Generation:0,CreationTimestamp:2020-03-18 11:34:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:34:57.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-d6bgh" for this suite. Mar 18 11:35:03.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:35:03.496: INFO: namespace: e2e-tests-watch-d6bgh, resource: bindings, ignored listing per whitelist Mar 18 11:35:03.553: INFO: namespace e2e-tests-watch-d6bgh deletion completed in 6.108492074s • [SLOW TEST:66.252 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:35:03.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 18 11:35:03.711: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 18 11:35:08.714: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:35:09.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-qsqjq" for this suite. 
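The ReplicationController test above ("When the matched label of one of its pods change ... Then the pod is released") relies on the controller orphaning a pod once it stops matching the RC's selector. One way to trigger that by hand is sketched below with client-go: a strategic-merge patch that rewrites the label the RC selects on. The namespace, pod name, and label values are placeholders; the log only shows the base name pod-release.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Change the value of the label the RC selects on (e.g. name=pod-release);
	// the controller no longer matches the pod, releases it, and creates a
	// replacement to satisfy its replica count.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	_, err = clientset.CoreV1().Pods("default").Patch(
		context.TODO(), "pod-release-xxxxx",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}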
Mar 18 11:35:15.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:35:15.811: INFO: namespace: e2e-tests-replication-controller-qsqjq, resource: bindings, ignored listing per whitelist Mar 18 11:35:15.831: INFO: namespace e2e-tests-replication-controller-qsqjq deletion completed in 6.087709841s • [SLOW TEST:12.277 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:35:15.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-89b93255-690c-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:35:15.958: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-89bb74ef-690c-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-c7kr7" to be "success or failure" Mar 18 11:35:16.039: INFO: Pod "pod-projected-secrets-89bb74ef-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 80.085561ms Mar 18 11:35:18.043: INFO: Pod "pod-projected-secrets-89bb74ef-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084234336s Mar 18 11:35:20.047: INFO: Pod "pod-projected-secrets-89bb74ef-690c-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088286447s STEP: Saw pod success Mar 18 11:35:20.047: INFO: Pod "pod-projected-secrets-89bb74ef-690c-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:35:20.050: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-89bb74ef-690c-11ea-9856-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 18 11:35:20.080: INFO: Waiting for pod pod-projected-secrets-89bb74ef-690c-11ea-9856-0242ac11000f to disappear Mar 18 11:35:20.084: INFO: Pod pod-projected-secrets-89bb74ef-690c-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:35:20.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c7kr7" for this suite. 
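The projected-secret test above is the same idea as the earlier secret-volume tests, except the secret is wired in through a projected volume and the file permissions come from DefaultMode. The volume portion of such a pod is sketched below with the core/v1 types; the 0400 mode and the names are illustrative choices, not values read from this run.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedSecretVolume builds a projected volume that exposes a secret with
// an explicit default file mode, as the defaultMode test does.
func projectedSecretVolume(secretName string) corev1.Volume {
	mode := int32(0400) // defaultMode under test; use whatever mode you need
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}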
Mar 18 11:35:26.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:35:26.147: INFO: namespace: e2e-tests-projected-c7kr7, resource: bindings, ignored listing per whitelist Mar 18 11:35:26.181: INFO: namespace e2e-tests-projected-c7kr7 deletion completed in 6.093736699s • [SLOW TEST:10.350 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:35:26.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:35:26.290: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fe4266e-690c-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-t89bs" to be "success or failure" Mar 18 11:35:26.356: INFO: Pod "downwardapi-volume-8fe4266e-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 65.961166ms Mar 18 11:35:28.361: INFO: Pod "downwardapi-volume-8fe4266e-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071347709s Mar 18 11:35:30.365: INFO: Pod "downwardapi-volume-8fe4266e-690c-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075242005s STEP: Saw pod success Mar 18 11:35:30.365: INFO: Pod "downwardapi-volume-8fe4266e-690c-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:35:30.368: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-8fe4266e-690c-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:35:30.385: INFO: Waiting for pod downwardapi-volume-8fe4266e-690c-11ea-9856-0242ac11000f to disappear Mar 18 11:35:30.390: INFO: Pod downwardapi-volume-8fe4266e-690c-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:35:30.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-t89bs" for this suite. 
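The downward-API volume test above surfaces the container's own memory request as a file inside the pod. The relevant pieces are a resource request on the container plus a DownwardAPIVolumeFile with a ResourceFieldRef pointing back at that container; a trimmed sketch follows, with the 32Mi request, image, and paths chosen purely for illustration.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryRequestPod exposes the client container's memory request
// at /etc/podinfo/memory_request, which the container then reads back.
func downwardAPIMemoryRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							// Resolve the request of the container named above.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
}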
Mar 18 11:35:36.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:35:36.473: INFO: namespace: e2e-tests-downward-api-t89bs, resource: bindings, ignored listing per whitelist Mar 18 11:35:36.486: INFO: namespace e2e-tests-downward-api-t89bs deletion completed in 6.093156257s • [SLOW TEST:10.305 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:35:36.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 11:35:36.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hk6h7' Mar 18 11:35:36.672: INFO: stderr: "" Mar 18 11:35:36.672: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Mar 18 11:35:36.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hk6h7' Mar 18 11:35:41.334: INFO: stderr: "" Mar 18 11:35:41.334: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:35:41.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hk6h7" for this suite. 
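The "Kubectl run pod" test above checks that `kubectl run ... --restart=Never --generator=run-pod/v1` produces a plain Pod rather than a workload controller. The object that command amounts to looks roughly like the sketch below; the "run" label key is assumed from kubectl's usual behaviour rather than shown in this log.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxOneShotPod approximates what `kubectl run e2e-test-nginx-pod
// --restart=Never --image=docker.io/library/nginx:1.14-alpine` creates:
// a standalone Pod with RestartPolicy Never and no owning controller.
func nginxOneShotPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-test-nginx-pod",
			Labels: map[string]string{"run": "e2e-test-nginx-pod"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
}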
Mar 18 11:35:47.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:35:47.365: INFO: namespace: e2e-tests-kubectl-hk6h7, resource: bindings, ignored listing per whitelist Mar 18 11:35:47.440: INFO: namespace e2e-tests-kubectl-hk6h7 deletion completed in 6.102807533s • [SLOW TEST:10.954 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:35:47.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 11:35:47.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-zzwff' Mar 18 11:35:47.607: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 11:35:47.607: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Mar 18 11:35:47.613: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 18 11:35:47.628: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 18 11:35:47.669: INFO: scanned /root for discovery docs: Mar 18 11:35:47.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-zzwff' Mar 18 11:36:03.538: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 18 11:36:03.538: INFO: stdout: "Created e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb\nScaling up e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Mar 18 11:36:03.538: INFO: stdout: "Created e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb\nScaling up e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 18 11:36:03.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zzwff' Mar 18 11:36:03.650: INFO: stderr: "" Mar 18 11:36:03.650: INFO: stdout: "e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb-wxjmc " Mar 18 11:36:03.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb-wxjmc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zzwff' Mar 18 11:36:03.756: INFO: stderr: "" Mar 18 11:36:03.756: INFO: stdout: "true" Mar 18 11:36:03.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb-wxjmc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zzwff' Mar 18 11:36:03.871: INFO: stderr: "" Mar 18 11:36:03.871: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 18 11:36:03.871: INFO: e2e-test-nginx-rc-8c020c224c6c331c8f6a742664b0c3eb-wxjmc is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Mar 18 11:36:03.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zzwff' Mar 18 11:36:03.978: INFO: stderr: "" Mar 18 11:36:03.978: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:36:03.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zzwff" for this suite. 
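After the rolling-update above, the test verifies the replacement pods through go-templates: list pods by the RC's "run" label, confirm the container is running, and confirm it carries the expected image. The same check expressed with client-go is sketched below; the namespace and label are copied from this run but would differ in any other environment.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same check the go-templates above perform: every pod carrying the RC's
	// "run" label should be running and using the expected image.
	pods, err := clientset.CoreV1().Pods("e2e-tests-kubectl-zzwff").List(
		context.TODO(), metav1.ListOptions{LabelSelector: "run=e2e-test-nginx-rc"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		running := false
		for _, cs := range p.Status.ContainerStatuses {
			if cs.State.Running != nil {
				running = true
			}
		}
		fmt.Printf("%s image=%s running=%v\n",
			p.Name, p.Spec.Containers[0].Image, running)
	}
}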
Mar 18 11:36:10.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:36:10.034: INFO: namespace: e2e-tests-kubectl-zzwff, resource: bindings, ignored listing per whitelist Mar 18 11:36:10.090: INFO: namespace e2e-tests-kubectl-zzwff deletion completed in 6.092642988s • [SLOW TEST:22.650 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:36:10.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0318 11:36:20.212842 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 11:36:20.212: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:36:20.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9klxb" for this suite. 
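The garbage-collector test above deletes an RC without orphaning its pods, i.e. with a cascading propagation policy, and then waits for the pods to be garbage collected. The delete call itself is sketched below with client-go; background propagation, the namespace, and the RC name are illustrative choices, since the log does not show the exact options the test passes.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Cascading delete: the garbage collector removes the RC's pods as well,
	// instead of orphaning them.
	policy := metav1.DeletePropagationBackground
	err = clientset.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "example-rc",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}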
Mar 18 11:36:26.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:36:26.260: INFO: namespace: e2e-tests-gc-9klxb, resource: bindings, ignored listing per whitelist Mar 18 11:36:26.309: INFO: namespace e2e-tests-gc-9klxb deletion completed in 6.092883178s • [SLOW TEST:16.218 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:36:26.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:36:26.433: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 18 11:36:31.438: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 18 11:36:31.438: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 18 11:36:31.514: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-pgqkp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pgqkp/deployments/test-cleanup-deployment,UID:b6bee8a2-690c-11ea-99e8-0242ac110002,ResourceVersion:492763,Generation:1,CreationTimestamp:2020-03-18 11:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 18 11:36:31.521: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Mar 18 11:36:31.521: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 18 11:36:31.521: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-pgqkp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pgqkp/replicasets/test-cleanup-controller,UID:b3bcf3aa-690c-11ea-99e8-0242ac110002,ResourceVersion:492764,Generation:1,CreationTimestamp:2020-03-18 11:36:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b6bee8a2-690c-11ea-99e8-0242ac110002 0xc0009c5437 0xc0009c5438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 18 11:36:31.540: INFO: Pod "test-cleanup-controller-twqsw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-twqsw,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-pgqkp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pgqkp/pods/test-cleanup-controller-twqsw,UID:b3beef2e-690c-11ea-99e8-0242ac110002,ResourceVersion:492758,Generation:0,CreationTimestamp:2020-03-18 11:36:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller b3bcf3aa-690c-11ea-99e8-0242ac110002 0xc00182ab77 0xc00182ab78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2vwhc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2vwhc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2vwhc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00182abf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00182ac10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:36:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:36:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:36:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:36:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.68,StartTime:2020-03-18 11:36:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 11:36:28 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4fc4145c5aa753b58fa3f81b0fe2a97bb3c721368c672d2848462afdd9ba7c52}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:36:31.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-pgqkp" for this suite. Mar 18 11:36:37.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:36:37.717: INFO: namespace: e2e-tests-deployment-pgqkp, resource: bindings, ignored listing per whitelist Mar 18 11:36:37.745: INFO: namespace e2e-tests-deployment-pgqkp deletion completed in 6.148200731s • [SLOW TEST:11.436 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:36:37.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ba891f4c-690c-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:36:37.836: INFO: Waiting up to 5m0s for pod "pod-secrets-ba8982ff-690c-11ea-9856-0242ac11000f" in namespace "e2e-tests-secrets-kfvcr" to be "success or failure" Mar 18 11:36:37.871: INFO: Pod "pod-secrets-ba8982ff-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.763042ms Mar 18 11:36:39.950: INFO: Pod "pod-secrets-ba8982ff-690c-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113133423s Mar 18 11:36:41.954: INFO: Pod "pod-secrets-ba8982ff-690c-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.117965382s STEP: Saw pod success Mar 18 11:36:41.955: INFO: Pod "pod-secrets-ba8982ff-690c-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:36:41.958: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ba8982ff-690c-11ea-9856-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 18 11:36:41.987: INFO: Waiting for pod pod-secrets-ba8982ff-690c-11ea-9856-0242ac11000f to disappear Mar 18 11:36:42.021: INFO: Pod pod-secrets-ba8982ff-690c-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:36:42.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-kfvcr" for this suite. Mar 18 11:36:48.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:36:48.112: INFO: namespace: e2e-tests-secrets-kfvcr, resource: bindings, ignored listing per whitelist Mar 18 11:36:48.125: INFO: namespace e2e-tests-secrets-kfvcr deletion completed in 6.100322845s • [SLOW TEST:10.380 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:36:48.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-twsd4 Mar 18 11:36:52.250: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-twsd4 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 11:36:52.253: INFO: Initial restart count of pod liveness-http is 0 Mar 18 11:37:08.286: INFO: Restart count of pod e2e-tests-container-probe-twsd4/liveness-http is now 1 (16.032423027s elapsed) Mar 18 11:37:28.326: INFO: Restart count of pod e2e-tests-container-probe-twsd4/liveness-http is now 2 (36.072525086s elapsed) Mar 18 11:37:46.370: INFO: Restart count of pod e2e-tests-container-probe-twsd4/liveness-http is now 3 (54.116707963s elapsed) Mar 18 11:38:06.417: INFO: Restart count of pod e2e-tests-container-probe-twsd4/liveness-http is now 4 (1m14.164358959s elapsed) Mar 18 11:39:10.577: INFO: Restart count of pod e2e-tests-container-probe-twsd4/liveness-http is now 5 (2m18.323449584s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:39:10.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-twsd4" for this suite. Mar 18 11:39:16.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:39:16.685: INFO: namespace: e2e-tests-container-probe-twsd4, resource: bindings, ignored listing per whitelist Mar 18 11:39:16.742: INFO: namespace e2e-tests-container-probe-twsd4 deletion completed in 6.118210501s • [SLOW TEST:148.616 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:39:16.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-194fc4c4-690d-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 11:39:16.864: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-19507dc1-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-24pms" to be "success or failure" Mar 18 11:39:16.883: INFO: Pod "pod-projected-secrets-19507dc1-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.94845ms Mar 18 11:39:18.887: INFO: Pod "pod-projected-secrets-19507dc1-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023083003s Mar 18 11:39:20.891: INFO: Pod "pod-projected-secrets-19507dc1-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027490649s STEP: Saw pod success Mar 18 11:39:20.891: INFO: Pod "pod-projected-secrets-19507dc1-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:39:20.895: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-19507dc1-690d-11ea-9856-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 18 11:39:20.915: INFO: Waiting for pod pod-projected-secrets-19507dc1-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:39:20.919: INFO: Pod pod-projected-secrets-19507dc1-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:39:20.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-24pms" for this suite. 
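For reference, the behaviour exercised by "[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set" comes down to a pod spec with a projected Secret volume, an explicit defaultMode, and a pod-level security context. The sketch below builds such an object with the same k8s.io/api Go types that appear in the struct dumps in this log and prints it as JSON; every name, image, and numeric value is an illustrative placeholder, not an object from this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			// Run as a non-root UID and give the volume an fsGroup, as the test title describes.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    int64Ptr(1000),
				RunAsNonRoot: boolPtr(true),
				FSGroup:      int64Ptr(2000),
			},
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					// A projected volume wrapping a Secret, with an explicit defaultMode on the files.
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0440),
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ln /etc/demo && cat /etc/demo/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/demo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The framework then waits for the pod to reach "success or failure" and reads the container log, the same pattern visible throughout this transcript.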
Mar 18 11:39:26.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:39:26.986: INFO: namespace: e2e-tests-projected-24pms, resource: bindings, ignored listing per whitelist Mar 18 11:39:27.013: INFO: namespace e2e-tests-projected-24pms deletion completed in 6.091676288s • [SLOW TEST:10.271 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:39:27.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-1f6e912d-690d-11ea-9856-0242ac11000f STEP: Creating secret with name s-test-opt-upd-1f6e9171-690d-11ea-9856-0242ac11000f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1f6e912d-690d-11ea-9856-0242ac11000f STEP: Updating secret s-test-opt-upd-1f6e9171-690d-11ea-9856-0242ac11000f STEP: Creating secret with name s-test-opt-create-1f6e9190-690d-11ea-9856-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:39:35.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hwjtc" for this suite. 
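The "optional updates should be reflected in volume" case above relies on a Secret volume source marked Optional: the pod may start before the referenced Secret exists, and the kubelet later syncs the mounted files as Secrets are created, updated, or deleted, which is why the test creates the s-test-opt-del/upd/create Secrets and then waits to observe the change in the volume. A minimal sketch of such a pod, using the same k8s.io/api types as the dumps in this log, with illustrative names only:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "optional-secret-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "maybe-secret",
				VolumeSource: corev1.VolumeSource{
					// Optional: the Secret need not exist when the pod is scheduled.
					Secret: &corev1.SecretVolumeSource{
						SecretName: "demo-opt-create",
						Optional:   boolPtr(true),
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do ls /etc/opt 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "maybe-secret", MountPath: "/etc/opt"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Propagation into the volume is asynchronous, so consumers have to poll, much as the "waiting to observe update in volume" step above does.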
Mar 18 11:39:57.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:39:57.345: INFO: namespace: e2e-tests-secrets-hwjtc, resource: bindings, ignored listing per whitelist Mar 18 11:39:57.359: INFO: namespace e2e-tests-secrets-hwjtc deletion completed in 22.09162354s • [SLOW TEST:30.346 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:39:57.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:39:57.503: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 5.810049ms)
Mar 18 11:39:57.506: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.283036ms)
Mar 18 11:39:57.509: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.888527ms)
Mar 18 11:39:57.512: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.455709ms)
Mar 18 11:39:57.515: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.158786ms)
Mar 18 11:39:57.518: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.983785ms)
Mar 18 11:39:57.520: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.63083ms)
Mar 18 11:39:57.523: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.88856ms)
Mar 18 11:39:57.526: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.989043ms)
Mar 18 11:39:57.529: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.708497ms)
Mar 18 11:39:57.532: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.754943ms)
Mar 18 11:39:57.535: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.103724ms)
Mar 18 11:39:57.538: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.319899ms)
Mar 18 11:39:57.542: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.341454ms)
Mar 18 11:39:57.545: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.346517ms)
Mar 18 11:39:57.548: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.110421ms)
Mar 18 11:39:57.553: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.522818ms)
Mar 18 11:39:57.557: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.616002ms)
Mar 18 11:39:57.560: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.125246ms)
Mar 18 11:39:57.563: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.348819ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:39:57.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-6kn8z" for this suite. Mar 18 11:40:03.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:40:03.632: INFO: namespace: e2e-tests-proxy-6kn8z, resource: bindings, ignored listing per whitelist Mar 18 11:40:03.663: INFO: namespace e2e-tests-proxy-6kn8z deletion completed in 6.096668238s • [SLOW TEST:6.303 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:40:03.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 18 11:40:03.784: INFO: Waiting up to 5m0s for pod "downward-api-354aa4cf-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-lq9tp" to be "success or failure" Mar 18 11:40:03.794: INFO: Pod "downward-api-354aa4cf-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.919805ms Mar 18 11:40:05.798: INFO: Pod "downward-api-354aa4cf-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013618338s Mar 18 11:40:07.802: INFO: Pod "downward-api-354aa4cf-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017985831s STEP: Saw pod success Mar 18 11:40:07.802: INFO: Pod "downward-api-354aa4cf-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:40:07.806: INFO: Trying to get logs from node hunter-worker2 pod downward-api-354aa4cf-690d-11ea-9856-0242ac11000f container dapi-container: STEP: delete the pod Mar 18 11:40:07.829: INFO: Waiting for pod downward-api-354aa4cf-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:40:07.886: INFO: Pod downward-api-354aa4cf-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:40:07.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-lq9tp" for this suite. 
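The "[sig-node] Downward API should provide pod UID as env vars" case above injects pod metadata into the container environment through fieldRef selectors. A minimal sketch using the same k8s.io/api types; the container name, image, and env var names are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	container := corev1.Container{
		Name:    "dapi-demo",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env | grep ^POD_"},
		Env: []corev1.EnvVar{
			{
				// Pod name from the downward API.
				Name: "POD_NAME",
				ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				},
			},
			{
				// Pod UID, the field this conformance case checks.
				Name: "POD_UID",
				ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
				},
			},
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			Containers:    []corev1.Container{container},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Environment variables are resolved once at container start, so this path is read-only, unlike downward API volume files.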
Mar 18 11:40:13.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:40:13.954: INFO: namespace: e2e-tests-downward-api-lq9tp, resource: bindings, ignored listing per whitelist Mar 18 11:40:14.009: INFO: namespace e2e-tests-downward-api-lq9tp deletion completed in 6.119309423s • [SLOW TEST:10.346 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:40:14.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-3b733542-690d-11ea-9856-0242ac11000f Mar 18 11:40:14.135: INFO: Pod name my-hostname-basic-3b733542-690d-11ea-9856-0242ac11000f: Found 0 pods out of 1 Mar 18 11:40:19.140: INFO: Pod name my-hostname-basic-3b733542-690d-11ea-9856-0242ac11000f: Found 1 pods out of 1 Mar 18 11:40:19.140: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3b733542-690d-11ea-9856-0242ac11000f" are running Mar 18 11:40:19.143: INFO: Pod "my-hostname-basic-3b733542-690d-11ea-9856-0242ac11000f-vx2jk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 11:40:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 11:40:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 11:40:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 11:40:14 +0000 UTC Reason: Message:}]) Mar 18 11:40:19.143: INFO: Trying to dial the pod Mar 18 11:40:24.168: INFO: Controller my-hostname-basic-3b733542-690d-11ea-9856-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-3b733542-690d-11ea-9856-0242ac11000f-vx2jk]: "my-hostname-basic-3b733542-690d-11ea-9856-0242ac11000f-vx2jk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:40:24.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-knwmk" for this suite. 
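The ReplicationController case above creates a one-replica controller running a public hostname-serving image and then dials each replica until it reports its own pod name ("Got expected result from replica 1"). A sketch of the object shape it creates follows; the controller name, labels, image tag, and port are illustrative guesses, not values taken from this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "my-hostname-basic-demo"}
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic-demo"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						// Any image that echoes its own hostname works for this pattern.
						Name:  "serve-hostname",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}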
Mar 18 11:40:30.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:40:30.258: INFO: namespace: e2e-tests-replication-controller-knwmk, resource: bindings, ignored listing per whitelist Mar 18 11:40:30.262: INFO: namespace e2e-tests-replication-controller-knwmk deletion completed in 6.089288278s • [SLOW TEST:16.252 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:40:30.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 18 11:40:30.371: INFO: Waiting up to 5m0s for pod "pod-452355bd-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-9r2bm" to be "success or failure" Mar 18 11:40:30.414: INFO: Pod "pod-452355bd-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 42.554664ms Mar 18 11:40:32.418: INFO: Pod "pod-452355bd-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046485334s Mar 18 11:40:34.422: INFO: Pod "pod-452355bd-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050679718s STEP: Saw pod success Mar 18 11:40:34.422: INFO: Pod "pod-452355bd-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:40:34.425: INFO: Trying to get logs from node hunter-worker pod pod-452355bd-690d-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:40:34.459: INFO: Waiting for pod pod-452355bd-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:40:34.492: INFO: Pod pod-452355bd-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:40:34.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9r2bm" for this suite. 
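The "(root,0777,tmpfs)" emptyDir case above mounts a memory-backed emptyDir and has the test container verify 0777 permission semantics on it. A minimal sketch of a tmpfs-backed emptyDir pod follows; the names, image, and permission-checking command are illustrative stand-ins for what the e2e test image does.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /mnt/scratch/f && chmod 0777 /mnt/scratch/f && stat -c '%a' /mnt/scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "scratch",
					MountPath: "/mnt/scratch",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

With the medium set to Memory the volume's contents live in RAM and are cleared when the pod is removed.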
Mar 18 11:40:40.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:40:40.547: INFO: namespace: e2e-tests-emptydir-9r2bm, resource: bindings, ignored listing per whitelist Mar 18 11:40:40.591: INFO: namespace e2e-tests-emptydir-9r2bm deletion completed in 6.096174434s • [SLOW TEST:10.329 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:40:40.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:40:40.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b48b709-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-hrn5m" to be "success or failure" Mar 18 11:40:40.703: INFO: Pod "downwardapi-volume-4b48b709-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.0091ms Mar 18 11:40:42.708: INFO: Pod "downwardapi-volume-4b48b709-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020420568s Mar 18 11:40:44.712: INFO: Pod "downwardapi-volume-4b48b709-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024568522s STEP: Saw pod success Mar 18 11:40:44.712: INFO: Pod "downwardapi-volume-4b48b709-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:40:44.715: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4b48b709-690d-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:40:44.775: INFO: Waiting for pod downwardapi-volume-4b48b709-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:40:44.781: INFO: Pod downwardapi-volume-4b48b709-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:40:44.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hrn5m" for this suite. 
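The "[sig-storage] Downward API volume should provide podname only" case above exercises the downward API volume plugin with a single item that maps metadata.name to a file. A minimal sketch with illustrative names, again using the same k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// Expose only the pod name, as a file named "podname".
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}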
Mar 18 11:40:50.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:40:50.819: INFO: namespace: e2e-tests-downward-api-hrn5m, resource: bindings, ignored listing per whitelist Mar 18 11:40:50.879: INFO: namespace e2e-tests-downward-api-hrn5m deletion completed in 6.095678563s • [SLOW TEST:10.288 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:40:50.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-plmt STEP: Creating a pod to test atomic-volume-subpath Mar 18 11:40:51.054: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-plmt" in namespace "e2e-tests-subpath-zzbf7" to be "success or failure" Mar 18 11:40:51.065: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.226725ms Mar 18 11:40:53.085: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031034284s Mar 18 11:40:55.088: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03424625s Mar 18 11:40:57.092: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Running", Reason="", readiness=false. Elapsed: 6.037982232s Mar 18 11:40:59.096: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Running", Reason="", readiness=false. Elapsed: 8.042461334s Mar 18 11:41:01.100: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Running", Reason="", readiness=false. Elapsed: 10.046469509s Mar 18 11:41:03.105: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Running", Reason="", readiness=false. Elapsed: 12.05085129s Mar 18 11:41:05.109: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Running", Reason="", readiness=false. Elapsed: 14.055277743s Mar 18 11:41:07.113: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Running", Reason="", readiness=false. Elapsed: 16.059120671s Mar 18 11:41:09.121: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Running", Reason="", readiness=false. Elapsed: 18.066848791s Mar 18 11:41:11.139: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Running", Reason="", readiness=false. Elapsed: 20.085118843s Mar 18 11:41:13.143: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.088826927s Mar 18 11:41:15.147: INFO: Pod "pod-subpath-test-downwardapi-plmt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.093091188s STEP: Saw pod success Mar 18 11:41:15.147: INFO: Pod "pod-subpath-test-downwardapi-plmt" satisfied condition "success or failure" Mar 18 11:41:15.150: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-plmt container test-container-subpath-downwardapi-plmt: STEP: delete the pod Mar 18 11:41:15.184: INFO: Waiting for pod pod-subpath-test-downwardapi-plmt to disappear Mar 18 11:41:15.200: INFO: Pod pod-subpath-test-downwardapi-plmt no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-plmt Mar 18 11:41:15.200: INFO: Deleting pod "pod-subpath-test-downwardapi-plmt" in namespace "e2e-tests-subpath-zzbf7" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:41:15.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-zzbf7" for this suite. Mar 18 11:41:21.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:41:21.289: INFO: namespace: e2e-tests-subpath-zzbf7, resource: bindings, ignored listing per whitelist Mar 18 11:41:21.298: INFO: namespace e2e-tests-subpath-zzbf7 deletion completed in 6.089055209s • [SLOW TEST:30.419 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:41:21.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-638de092-690d-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 11:41:21.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-639030c6-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-configmap-4lgn7" to be "success or failure" Mar 18 11:41:21.480: INFO: Pod "pod-configmaps-639030c6-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 61.232069ms Mar 18 11:41:23.484: INFO: Pod "pod-configmaps-639030c6-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065198288s Mar 18 11:41:25.492: INFO: Pod "pod-configmaps-639030c6-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.073611123s STEP: Saw pod success Mar 18 11:41:25.492: INFO: Pod "pod-configmaps-639030c6-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:41:25.495: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-639030c6-690d-11ea-9856-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 18 11:41:25.515: INFO: Waiting for pod pod-configmaps-639030c6-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:41:25.519: INFO: Pod pod-configmaps-639030c6-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:41:25.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4lgn7" for this suite. Mar 18 11:41:31.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:41:31.600: INFO: namespace: e2e-tests-configmap-4lgn7, resource: bindings, ignored listing per whitelist Mar 18 11:41:31.622: INFO: namespace e2e-tests-configmap-4lgn7 deletion completed in 6.09968812s • [SLOW TEST:10.323 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:41:31.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 18 11:41:36.268: INFO: Successfully updated pod "pod-update-activedeadlineseconds-69b68d4b-690d-11ea-9856-0242ac11000f" Mar 18 11:41:36.268: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-69b68d4b-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-pods-pvk9f" to be "terminated due to deadline exceeded" Mar 18 11:41:36.275: INFO: Pod "pod-update-activedeadlineseconds-69b68d4b-690d-11ea-9856-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 6.700477ms Mar 18 11:41:38.279: INFO: Pod "pod-update-activedeadlineseconds-69b68d4b-690d-11ea-9856-0242ac11000f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.011001599s Mar 18 11:41:38.279: INFO: Pod "pod-update-activedeadlineseconds-69b68d4b-690d-11ea-9856-0242ac11000f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:41:38.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pvk9f" for this suite. Mar 18 11:41:44.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:41:44.343: INFO: namespace: e2e-tests-pods-pvk9f, resource: bindings, ignored listing per whitelist Mar 18 11:41:44.374: INFO: namespace e2e-tests-pods-pvk9f deletion completed in 6.09064791s • [SLOW TEST:12.752 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:41:44.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:41:44.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7152dacc-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-mksmx" to be "success or failure" Mar 18 11:41:44.525: INFO: Pod "downwardapi-volume-7152dacc-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.400709ms Mar 18 11:41:46.528: INFO: Pod "downwardapi-volume-7152dacc-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013948888s Mar 18 11:41:48.532: INFO: Pod "downwardapi-volume-7152dacc-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01789132s STEP: Saw pod success Mar 18 11:41:48.532: INFO: Pod "downwardapi-volume-7152dacc-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:41:48.535: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7152dacc-690d-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:41:48.580: INFO: Waiting for pod downwardapi-volume-7152dacc-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:41:48.590: INFO: Pod downwardapi-volume-7152dacc-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:41:48.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mksmx" for this suite. Mar 18 11:41:54.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:41:54.621: INFO: namespace: e2e-tests-projected-mksmx, resource: bindings, ignored listing per whitelist Mar 18 11:41:54.696: INFO: namespace e2e-tests-projected-mksmx deletion completed in 6.102019184s • [SLOW TEST:10.321 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:41:54.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2zqc2 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2zqc2 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-2zqc2 Mar 18 11:41:54.827: INFO: Found 0 stateful pods, waiting for 1 Mar 18 11:42:04.831: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 18 11:42:04.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2zqc2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || 
true' Mar 18 11:42:05.078: INFO: stderr: "I0318 11:42:04.964129 2246 log.go:172] (0xc000138580) (0xc0007c35e0) Create stream\nI0318 11:42:04.964196 2246 log.go:172] (0xc000138580) (0xc0007c35e0) Stream added, broadcasting: 1\nI0318 11:42:04.966839 2246 log.go:172] (0xc000138580) Reply frame received for 1\nI0318 11:42:04.966915 2246 log.go:172] (0xc000138580) (0xc000346640) Create stream\nI0318 11:42:04.966944 2246 log.go:172] (0xc000138580) (0xc000346640) Stream added, broadcasting: 3\nI0318 11:42:04.967901 2246 log.go:172] (0xc000138580) Reply frame received for 3\nI0318 11:42:04.967939 2246 log.go:172] (0xc000138580) (0xc00032a000) Create stream\nI0318 11:42:04.967949 2246 log.go:172] (0xc000138580) (0xc00032a000) Stream added, broadcasting: 5\nI0318 11:42:04.968783 2246 log.go:172] (0xc000138580) Reply frame received for 5\nI0318 11:42:05.071393 2246 log.go:172] (0xc000138580) Data frame received for 3\nI0318 11:42:05.071441 2246 log.go:172] (0xc000346640) (3) Data frame handling\nI0318 11:42:05.071528 2246 log.go:172] (0xc000346640) (3) Data frame sent\nI0318 11:42:05.071685 2246 log.go:172] (0xc000138580) Data frame received for 5\nI0318 11:42:05.071725 2246 log.go:172] (0xc00032a000) (5) Data frame handling\nI0318 11:42:05.071775 2246 log.go:172] (0xc000138580) Data frame received for 3\nI0318 11:42:05.071800 2246 log.go:172] (0xc000346640) (3) Data frame handling\nI0318 11:42:05.073594 2246 log.go:172] (0xc000138580) Data frame received for 1\nI0318 11:42:05.073707 2246 log.go:172] (0xc0007c35e0) (1) Data frame handling\nI0318 11:42:05.073764 2246 log.go:172] (0xc0007c35e0) (1) Data frame sent\nI0318 11:42:05.073803 2246 log.go:172] (0xc000138580) (0xc0007c35e0) Stream removed, broadcasting: 1\nI0318 11:42:05.073836 2246 log.go:172] (0xc000138580) Go away received\nI0318 11:42:05.074183 2246 log.go:172] (0xc000138580) (0xc0007c35e0) Stream removed, broadcasting: 1\nI0318 11:42:05.074208 2246 log.go:172] (0xc000138580) (0xc000346640) Stream removed, broadcasting: 3\nI0318 11:42:05.074221 2246 log.go:172] (0xc000138580) (0xc00032a000) Stream removed, broadcasting: 5\n" Mar 18 11:42:05.078: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 11:42:05.078: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 11:42:05.082: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 18 11:42:15.087: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 11:42:15.087: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 11:42:15.122: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999619s Mar 18 11:42:16.127: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.973986822s Mar 18 11:42:17.131: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.96889339s Mar 18 11:42:18.136: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.964396285s Mar 18 11:42:19.141: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.959560287s Mar 18 11:42:20.146: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.954450194s Mar 18 11:42:21.151: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.949535894s Mar 18 11:42:22.156: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.944420932s Mar 18 11:42:23.161: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.939610091s Mar 18 
11:42:24.165: INFO: Verifying statefulset ss doesn't scale past 1 for another 934.822197ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2zqc2 Mar 18 11:42:25.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2zqc2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 11:42:25.360: INFO: stderr: "I0318 11:42:25.300228 2269 log.go:172] (0xc0001380b0) (0xc000612280) Create stream\nI0318 11:42:25.300309 2269 log.go:172] (0xc0001380b0) (0xc000612280) Stream added, broadcasting: 1\nI0318 11:42:25.303233 2269 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0318 11:42:25.303267 2269 log.go:172] (0xc0001380b0) (0xc000612320) Create stream\nI0318 11:42:25.303288 2269 log.go:172] (0xc0001380b0) (0xc000612320) Stream added, broadcasting: 3\nI0318 11:42:25.304339 2269 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0318 11:42:25.304363 2269 log.go:172] (0xc0001380b0) (0xc000522be0) Create stream\nI0318 11:42:25.304370 2269 log.go:172] (0xc0001380b0) (0xc000522be0) Stream added, broadcasting: 5\nI0318 11:42:25.305550 2269 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0318 11:42:25.355203 2269 log.go:172] (0xc0001380b0) Data frame received for 5\nI0318 11:42:25.355283 2269 log.go:172] (0xc0001380b0) Data frame received for 3\nI0318 11:42:25.355317 2269 log.go:172] (0xc000612320) (3) Data frame handling\nI0318 11:42:25.355342 2269 log.go:172] (0xc000612320) (3) Data frame sent\nI0318 11:42:25.355359 2269 log.go:172] (0xc0001380b0) Data frame received for 3\nI0318 11:42:25.355380 2269 log.go:172] (0xc000612320) (3) Data frame handling\nI0318 11:42:25.355478 2269 log.go:172] (0xc000522be0) (5) Data frame handling\nI0318 11:42:25.356960 2269 log.go:172] (0xc0001380b0) Data frame received for 1\nI0318 11:42:25.356993 2269 log.go:172] (0xc000612280) (1) Data frame handling\nI0318 11:42:25.357014 2269 log.go:172] (0xc000612280) (1) Data frame sent\nI0318 11:42:25.357043 2269 log.go:172] (0xc0001380b0) (0xc000612280) Stream removed, broadcasting: 1\nI0318 11:42:25.357078 2269 log.go:172] (0xc0001380b0) Go away received\nI0318 11:42:25.357460 2269 log.go:172] (0xc0001380b0) (0xc000612280) Stream removed, broadcasting: 1\nI0318 11:42:25.357489 2269 log.go:172] (0xc0001380b0) (0xc000612320) Stream removed, broadcasting: 3\nI0318 11:42:25.357502 2269 log.go:172] (0xc0001380b0) (0xc000522be0) Stream removed, broadcasting: 5\n" Mar 18 11:42:25.360: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 11:42:25.360: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 11:42:25.365: INFO: Found 1 stateful pods, waiting for 3 Mar 18 11:42:35.370: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 11:42:35.370: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 11:42:35.370: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 18 11:42:35.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2zqc2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 11:42:35.552: INFO: stderr: "I0318 
11:42:35.487238 2292 log.go:172] (0xc00015c840) (0xc00075a640) Create stream\nI0318 11:42:35.487293 2292 log.go:172] (0xc00015c840) (0xc00075a640) Stream added, broadcasting: 1\nI0318 11:42:35.490119 2292 log.go:172] (0xc00015c840) Reply frame received for 1\nI0318 11:42:35.490181 2292 log.go:172] (0xc00015c840) (0xc000696c80) Create stream\nI0318 11:42:35.490202 2292 log.go:172] (0xc00015c840) (0xc000696c80) Stream added, broadcasting: 3\nI0318 11:42:35.491105 2292 log.go:172] (0xc00015c840) Reply frame received for 3\nI0318 11:42:35.491155 2292 log.go:172] (0xc00015c840) (0xc0006d4000) Create stream\nI0318 11:42:35.491171 2292 log.go:172] (0xc00015c840) (0xc0006d4000) Stream added, broadcasting: 5\nI0318 11:42:35.492201 2292 log.go:172] (0xc00015c840) Reply frame received for 5\nI0318 11:42:35.546528 2292 log.go:172] (0xc00015c840) Data frame received for 5\nI0318 11:42:35.546585 2292 log.go:172] (0xc0006d4000) (5) Data frame handling\nI0318 11:42:35.546639 2292 log.go:172] (0xc00015c840) Data frame received for 3\nI0318 11:42:35.546689 2292 log.go:172] (0xc000696c80) (3) Data frame handling\nI0318 11:42:35.546720 2292 log.go:172] (0xc000696c80) (3) Data frame sent\nI0318 11:42:35.546735 2292 log.go:172] (0xc00015c840) Data frame received for 3\nI0318 11:42:35.546746 2292 log.go:172] (0xc000696c80) (3) Data frame handling\nI0318 11:42:35.548407 2292 log.go:172] (0xc00015c840) Data frame received for 1\nI0318 11:42:35.548431 2292 log.go:172] (0xc00075a640) (1) Data frame handling\nI0318 11:42:35.548444 2292 log.go:172] (0xc00075a640) (1) Data frame sent\nI0318 11:42:35.548459 2292 log.go:172] (0xc00015c840) (0xc00075a640) Stream removed, broadcasting: 1\nI0318 11:42:35.548517 2292 log.go:172] (0xc00015c840) Go away received\nI0318 11:42:35.548749 2292 log.go:172] (0xc00015c840) (0xc00075a640) Stream removed, broadcasting: 1\nI0318 11:42:35.548780 2292 log.go:172] (0xc00015c840) (0xc000696c80) Stream removed, broadcasting: 3\nI0318 11:42:35.548802 2292 log.go:172] (0xc00015c840) (0xc0006d4000) Stream removed, broadcasting: 5\n" Mar 18 11:42:35.552: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 11:42:35.553: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 11:42:35.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2zqc2 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 11:42:35.789: INFO: stderr: "I0318 11:42:35.669681 2315 log.go:172] (0xc0007b22c0) (0xc000716640) Create stream\nI0318 11:42:35.669747 2315 log.go:172] (0xc0007b22c0) (0xc000716640) Stream added, broadcasting: 1\nI0318 11:42:35.672447 2315 log.go:172] (0xc0007b22c0) Reply frame received for 1\nI0318 11:42:35.672509 2315 log.go:172] (0xc0007b22c0) (0xc0007166e0) Create stream\nI0318 11:42:35.672524 2315 log.go:172] (0xc0007b22c0) (0xc0007166e0) Stream added, broadcasting: 3\nI0318 11:42:35.673638 2315 log.go:172] (0xc0007b22c0) Reply frame received for 3\nI0318 11:42:35.673663 2315 log.go:172] (0xc0007b22c0) (0xc0005d4e60) Create stream\nI0318 11:42:35.673672 2315 log.go:172] (0xc0007b22c0) (0xc0005d4e60) Stream added, broadcasting: 5\nI0318 11:42:35.674752 2315 log.go:172] (0xc0007b22c0) Reply frame received for 5\nI0318 11:42:35.781952 2315 log.go:172] (0xc0007b22c0) Data frame received for 3\nI0318 11:42:35.781986 2315 log.go:172] (0xc0007166e0) (3) Data frame handling\nI0318 11:42:35.782048 2315 
log.go:172] (0xc0007166e0) (3) Data frame sent\nI0318 11:42:35.782086 2315 log.go:172] (0xc0007b22c0) Data frame received for 3\nI0318 11:42:35.782100 2315 log.go:172] (0xc0007166e0) (3) Data frame handling\nI0318 11:42:35.782272 2315 log.go:172] (0xc0007b22c0) Data frame received for 5\nI0318 11:42:35.782293 2315 log.go:172] (0xc0005d4e60) (5) Data frame handling\nI0318 11:42:35.784617 2315 log.go:172] (0xc0007b22c0) Data frame received for 1\nI0318 11:42:35.784650 2315 log.go:172] (0xc000716640) (1) Data frame handling\nI0318 11:42:35.784780 2315 log.go:172] (0xc000716640) (1) Data frame sent\nI0318 11:42:35.784823 2315 log.go:172] (0xc0007b22c0) (0xc000716640) Stream removed, broadcasting: 1\nI0318 11:42:35.785041 2315 log.go:172] (0xc0007b22c0) (0xc000716640) Stream removed, broadcasting: 1\nI0318 11:42:35.785076 2315 log.go:172] (0xc0007b22c0) (0xc0007166e0) Stream removed, broadcasting: 3\nI0318 11:42:35.785578 2315 log.go:172] (0xc0007b22c0) (0xc0005d4e60) Stream removed, broadcasting: 5\nI0318 11:42:35.785676 2315 log.go:172] (0xc0007b22c0) Go away received\n" Mar 18 11:42:35.789: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 11:42:35.789: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 11:42:35.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2zqc2 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 11:42:36.034: INFO: stderr: "I0318 11:42:35.911431 2337 log.go:172] (0xc00082e2c0) (0xc000673360) Create stream\nI0318 11:42:35.911481 2337 log.go:172] (0xc00082e2c0) (0xc000673360) Stream added, broadcasting: 1\nI0318 11:42:35.913707 2337 log.go:172] (0xc00082e2c0) Reply frame received for 1\nI0318 11:42:35.913747 2337 log.go:172] (0xc00082e2c0) (0xc000572000) Create stream\nI0318 11:42:35.913759 2337 log.go:172] (0xc00082e2c0) (0xc000572000) Stream added, broadcasting: 3\nI0318 11:42:35.914694 2337 log.go:172] (0xc00082e2c0) Reply frame received for 3\nI0318 11:42:35.914729 2337 log.go:172] (0xc00082e2c0) (0xc00051c000) Create stream\nI0318 11:42:35.914740 2337 log.go:172] (0xc00082e2c0) (0xc00051c000) Stream added, broadcasting: 5\nI0318 11:42:35.915604 2337 log.go:172] (0xc00082e2c0) Reply frame received for 5\nI0318 11:42:36.027847 2337 log.go:172] (0xc00082e2c0) Data frame received for 3\nI0318 11:42:36.027890 2337 log.go:172] (0xc000572000) (3) Data frame handling\nI0318 11:42:36.027935 2337 log.go:172] (0xc00082e2c0) Data frame received for 5\nI0318 11:42:36.027955 2337 log.go:172] (0xc00051c000) (5) Data frame handling\nI0318 11:42:36.027973 2337 log.go:172] (0xc000572000) (3) Data frame sent\nI0318 11:42:36.027994 2337 log.go:172] (0xc00082e2c0) Data frame received for 3\nI0318 11:42:36.028009 2337 log.go:172] (0xc000572000) (3) Data frame handling\nI0318 11:42:36.029767 2337 log.go:172] (0xc00082e2c0) Data frame received for 1\nI0318 11:42:36.029808 2337 log.go:172] (0xc000673360) (1) Data frame handling\nI0318 11:42:36.029830 2337 log.go:172] (0xc000673360) (1) Data frame sent\nI0318 11:42:36.029855 2337 log.go:172] (0xc00082e2c0) (0xc000673360) Stream removed, broadcasting: 1\nI0318 11:42:36.029885 2337 log.go:172] (0xc00082e2c0) Go away received\nI0318 11:42:36.030393 2337 log.go:172] (0xc00082e2c0) (0xc000673360) Stream removed, broadcasting: 1\nI0318 11:42:36.030417 2337 log.go:172] (0xc00082e2c0) (0xc000572000) Stream removed, broadcasting: 3\nI0318 
11:42:36.030429 2337 log.go:172] (0xc00082e2c0) (0xc00051c000) Stream removed, broadcasting: 5\n" Mar 18 11:42:36.034: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 11:42:36.034: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 11:42:36.034: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 11:42:36.062: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 18 11:42:46.090: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 11:42:46.090: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 18 11:42:46.090: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 18 11:42:46.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999499s Mar 18 11:42:47.106: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995176705s Mar 18 11:42:48.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990074051s Mar 18 11:42:49.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98468093s Mar 18 11:42:50.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97918467s Mar 18 11:42:51.127: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.974412221s Mar 18 11:42:52.132: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.969556307s Mar 18 11:42:53.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964462073s Mar 18 11:42:54.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.959644338s Mar 18 11:42:55.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.026946ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-2zqc2 Mar 18 11:42:56.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2zqc2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 11:42:56.384: INFO: stderr: "I0318 11:42:56.281684 2359 log.go:172] (0xc00014c840) (0xc0007a94a0) Create stream\nI0318 11:42:56.281749 2359 log.go:172] (0xc00014c840) (0xc0007a94a0) Stream added, broadcasting: 1\nI0318 11:42:56.284040 2359 log.go:172] (0xc00014c840) Reply frame received for 1\nI0318 11:42:56.284121 2359 log.go:172] (0xc00014c840) (0xc0006ae000) Create stream\nI0318 11:42:56.284160 2359 log.go:172] (0xc00014c840) (0xc0006ae000) Stream added, broadcasting: 3\nI0318 11:42:56.285403 2359 log.go:172] (0xc00014c840) Reply frame received for 3\nI0318 11:42:56.285449 2359 log.go:172] (0xc00014c840) (0xc0006ae0a0) Create stream\nI0318 11:42:56.285463 2359 log.go:172] (0xc00014c840) (0xc0006ae0a0) Stream added, broadcasting: 5\nI0318 11:42:56.286315 2359 log.go:172] (0xc00014c840) Reply frame received for 5\nI0318 11:42:56.377350 2359 log.go:172] (0xc00014c840) Data frame received for 3\nI0318 11:42:56.377400 2359 log.go:172] (0xc0006ae000) (3) Data frame handling\nI0318 11:42:56.377427 2359 log.go:172] (0xc0006ae000) (3) Data frame sent\nI0318 11:42:56.377639 2359 log.go:172] (0xc00014c840) Data frame received for 5\nI0318 11:42:56.377668 2359 log.go:172] (0xc0006ae0a0) (5) Data frame handling\nI0318 11:42:56.377706 2359 log.go:172] (0xc00014c840) Data frame received for 3\nI0318 11:42:56.377749 2359 log.go:172] (0xc0006ae000) (3) Data 
frame handling\nI0318 11:42:56.379750 2359 log.go:172] (0xc00014c840) Data frame received for 1\nI0318 11:42:56.379768 2359 log.go:172] (0xc0007a94a0) (1) Data frame handling\nI0318 11:42:56.379779 2359 log.go:172] (0xc0007a94a0) (1) Data frame sent\nI0318 11:42:56.379787 2359 log.go:172] (0xc00014c840) (0xc0007a94a0) Stream removed, broadcasting: 1\nI0318 11:42:56.379798 2359 log.go:172] (0xc00014c840) Go away received\nI0318 11:42:56.380143 2359 log.go:172] (0xc00014c840) (0xc0007a94a0) Stream removed, broadcasting: 1\nI0318 11:42:56.380179 2359 log.go:172] (0xc00014c840) (0xc0006ae000) Stream removed, broadcasting: 3\nI0318 11:42:56.380201 2359 log.go:172] (0xc00014c840) (0xc0006ae0a0) Stream removed, broadcasting: 5\n" Mar 18 11:42:56.384: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 11:42:56.384: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 11:42:56.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2zqc2 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 11:42:56.577: INFO: stderr: "I0318 11:42:56.504531 2381 log.go:172] (0xc0000f0420) (0xc0005eb360) Create stream\nI0318 11:42:56.504601 2381 log.go:172] (0xc0000f0420) (0xc0005eb360) Stream added, broadcasting: 1\nI0318 11:42:56.507091 2381 log.go:172] (0xc0000f0420) Reply frame received for 1\nI0318 11:42:56.507127 2381 log.go:172] (0xc0000f0420) (0xc00028e000) Create stream\nI0318 11:42:56.507138 2381 log.go:172] (0xc0000f0420) (0xc00028e000) Stream added, broadcasting: 3\nI0318 11:42:56.508260 2381 log.go:172] (0xc0000f0420) Reply frame received for 3\nI0318 11:42:56.508319 2381 log.go:172] (0xc0000f0420) (0xc0006fa000) Create stream\nI0318 11:42:56.508343 2381 log.go:172] (0xc0000f0420) (0xc0006fa000) Stream added, broadcasting: 5\nI0318 11:42:56.509332 2381 log.go:172] (0xc0000f0420) Reply frame received for 5\nI0318 11:42:56.572661 2381 log.go:172] (0xc0000f0420) Data frame received for 5\nI0318 11:42:56.572700 2381 log.go:172] (0xc0006fa000) (5) Data frame handling\nI0318 11:42:56.572723 2381 log.go:172] (0xc0000f0420) Data frame received for 3\nI0318 11:42:56.572730 2381 log.go:172] (0xc00028e000) (3) Data frame handling\nI0318 11:42:56.572740 2381 log.go:172] (0xc00028e000) (3) Data frame sent\nI0318 11:42:56.572749 2381 log.go:172] (0xc0000f0420) Data frame received for 3\nI0318 11:42:56.572755 2381 log.go:172] (0xc00028e000) (3) Data frame handling\nI0318 11:42:56.574296 2381 log.go:172] (0xc0000f0420) Data frame received for 1\nI0318 11:42:56.574317 2381 log.go:172] (0xc0005eb360) (1) Data frame handling\nI0318 11:42:56.574326 2381 log.go:172] (0xc0005eb360) (1) Data frame sent\nI0318 11:42:56.574337 2381 log.go:172] (0xc0000f0420) (0xc0005eb360) Stream removed, broadcasting: 1\nI0318 11:42:56.574344 2381 log.go:172] (0xc0000f0420) Go away received\nI0318 11:42:56.574641 2381 log.go:172] (0xc0000f0420) (0xc0005eb360) Stream removed, broadcasting: 1\nI0318 11:42:56.574682 2381 log.go:172] (0xc0000f0420) (0xc00028e000) Stream removed, broadcasting: 3\nI0318 11:42:56.574696 2381 log.go:172] (0xc0000f0420) (0xc0006fa000) Stream removed, broadcasting: 5\n" Mar 18 11:42:56.577: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 11:42:56.577: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 
11:42:56.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2zqc2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 11:42:56.789: INFO: stderr: "I0318 11:42:56.705420 2404 log.go:172] (0xc0006244d0) (0xc0007e1360) Create stream\nI0318 11:42:56.705478 2404 log.go:172] (0xc0006244d0) (0xc0007e1360) Stream added, broadcasting: 1\nI0318 11:42:56.707876 2404 log.go:172] (0xc0006244d0) Reply frame received for 1\nI0318 11:42:56.707953 2404 log.go:172] (0xc0006244d0) (0xc00061e000) Create stream\nI0318 11:42:56.707987 2404 log.go:172] (0xc0006244d0) (0xc00061e000) Stream added, broadcasting: 3\nI0318 11:42:56.708947 2404 log.go:172] (0xc0006244d0) Reply frame received for 3\nI0318 11:42:56.709040 2404 log.go:172] (0xc0006244d0) (0xc000622000) Create stream\nI0318 11:42:56.709064 2404 log.go:172] (0xc0006244d0) (0xc000622000) Stream added, broadcasting: 5\nI0318 11:42:56.709910 2404 log.go:172] (0xc0006244d0) Reply frame received for 5\nI0318 11:42:56.783161 2404 log.go:172] (0xc0006244d0) Data frame received for 5\nI0318 11:42:56.783194 2404 log.go:172] (0xc000622000) (5) Data frame handling\nI0318 11:42:56.783221 2404 log.go:172] (0xc0006244d0) Data frame received for 3\nI0318 11:42:56.783230 2404 log.go:172] (0xc00061e000) (3) Data frame handling\nI0318 11:42:56.783240 2404 log.go:172] (0xc00061e000) (3) Data frame sent\nI0318 11:42:56.783248 2404 log.go:172] (0xc0006244d0) Data frame received for 3\nI0318 11:42:56.783256 2404 log.go:172] (0xc00061e000) (3) Data frame handling\nI0318 11:42:56.784884 2404 log.go:172] (0xc0006244d0) Data frame received for 1\nI0318 11:42:56.784913 2404 log.go:172] (0xc0007e1360) (1) Data frame handling\nI0318 11:42:56.784934 2404 log.go:172] (0xc0007e1360) (1) Data frame sent\nI0318 11:42:56.784969 2404 log.go:172] (0xc0006244d0) (0xc0007e1360) Stream removed, broadcasting: 1\nI0318 11:42:56.785006 2404 log.go:172] (0xc0006244d0) Go away received\nI0318 11:42:56.785351 2404 log.go:172] (0xc0006244d0) (0xc0007e1360) Stream removed, broadcasting: 1\nI0318 11:42:56.785383 2404 log.go:172] (0xc0006244d0) (0xc00061e000) Stream removed, broadcasting: 3\nI0318 11:42:56.785414 2404 log.go:172] (0xc0006244d0) (0xc000622000) Stream removed, broadcasting: 5\n" Mar 18 11:42:56.789: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 11:42:56.789: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 11:42:56.789: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 18 11:43:16.806: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2zqc2 Mar 18 11:43:16.808: INFO: Scaling statefulset ss to 0 Mar 18 11:43:16.816: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 11:43:16.818: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:43:16.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2zqc2" for this suite. 
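[Editor's note] The ordered scale-up/scale-down exercised above relies on the StatefulSet's OrderedReady pod management together with a readiness probe keyed to the index.html file the test keeps moving in and out of the nginx web root: with the file gone the probe fails, the pod reports Ready=false, and further scaling halts until readiness is restored. A minimal sketch of such a StatefulSet built from the k8s.io/api types follows; the name, labels, service name and the exact probe are illustrative assumptions, not values taken from the test source.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "ss"} // hypothetical label set

	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "ss-svc", // hypothetical headless service name
			// OrderedReady (the default) is what makes scale-up go 0,1,2 and
			// scale-down go 2,1,0, halting whenever a pod is not Ready.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
						// Readiness keyed to a file the test can move away;
						// with the file gone the pod reports Ready=false.
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{ // named ProbeHandler in newer API versions
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}

Switching PodManagementPolicy to Parallel would remove the ordering guarantees this test verifies; the log resumes below.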
Mar 18 11:43:22.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:43:22.897: INFO: namespace: e2e-tests-statefulset-2zqc2, resource: bindings, ignored listing per whitelist Mar 18 11:43:22.951: INFO: namespace e2e-tests-statefulset-2zqc2 deletion completed in 6.119640944s • [SLOW TEST:88.255 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:43:22.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-m7f6s/configmap-test-ac17deed-690d-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 11:43:23.115: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac19ba4a-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-configmap-m7f6s" to be "success or failure" Mar 18 11:43:23.120: INFO: Pod "pod-configmaps-ac19ba4a-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.565132ms Mar 18 11:43:25.146: INFO: Pod "pod-configmaps-ac19ba4a-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031269919s Mar 18 11:43:27.151: INFO: Pod "pod-configmaps-ac19ba4a-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03570662s STEP: Saw pod success Mar 18 11:43:27.151: INFO: Pod "pod-configmaps-ac19ba4a-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:43:27.154: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-ac19ba4a-690d-11ea-9856-0242ac11000f container env-test: STEP: delete the pod Mar 18 11:43:27.170: INFO: Waiting for pod pod-configmaps-ac19ba4a-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:43:27.174: INFO: Pod pod-configmaps-ac19ba4a-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:43:27.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-m7f6s" for this suite. 
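[Editor's note] Consuming a ConfigMap "via the environment", as the env-test container above does, amounts to a pod whose container pulls individual keys in through env[].valueFrom.configMapKeyRef. A minimal sketch with the k8s.io/api types; the ConfigMap name, key and environment variable name are placeholders, not the generated names seen in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // stand-in for the suite's test image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					// A single ConfigMap key surfaced as an environment variable.
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}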
Mar 18 11:43:33.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:43:33.248: INFO: namespace: e2e-tests-configmap-m7f6s, resource: bindings, ignored listing per whitelist Mar 18 11:43:33.263: INFO: namespace e2e-tests-configmap-m7f6s deletion completed in 6.085542391s • [SLOW TEST:10.312 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:43:33.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 11:43:33.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b240584f-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-6hq6v" to be "success or failure" Mar 18 11:43:33.451: INFO: Pod "downwardapi-volume-b240584f-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.627876ms Mar 18 11:43:35.455: INFO: Pod "downwardapi-volume-b240584f-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014316903s Mar 18 11:43:37.459: INFO: Pod "downwardapi-volume-b240584f-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017966583s STEP: Saw pod success Mar 18 11:43:37.459: INFO: Pod "downwardapi-volume-b240584f-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:43:37.462: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b240584f-690d-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 11:43:37.482: INFO: Waiting for pod downwardapi-volume-b240584f-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:43:37.486: INFO: Pod downwardapi-volume-b240584f-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:43:37.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6hq6v" for this suite. 
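[Editor's note] The downward API volume in the test above surfaces the container's own CPU limit as a file, which the client-container then reads back. A sketch of the relevant pod shape, assuming the k8s.io/api and k8s.io/apimachinery packages; the mount path, file name and limit value are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu-limit"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // stand-in for the suite's test image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// Exposes the container's CPU limit as a file in the volume.
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}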
Mar 18 11:43:43.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:43:43.510: INFO: namespace: e2e-tests-downward-api-6hq6v, resource: bindings, ignored listing per whitelist Mar 18 11:43:43.578: INFO: namespace e2e-tests-downward-api-6hq6v deletion completed in 6.088480301s • [SLOW TEST:10.315 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:43:43.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 18 11:43:43.679: INFO: Waiting up to 5m0s for pod "pod-b85a9ee2-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-94fk9" to be "success or failure" Mar 18 11:43:43.683: INFO: Pod "pod-b85a9ee2-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.70861ms Mar 18 11:43:45.686: INFO: Pod "pod-b85a9ee2-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007357172s Mar 18 11:43:47.690: INFO: Pod "pod-b85a9ee2-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011482378s STEP: Saw pod success Mar 18 11:43:47.690: INFO: Pod "pod-b85a9ee2-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:43:47.693: INFO: Trying to get logs from node hunter-worker pod pod-b85a9ee2-690d-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:43:47.725: INFO: Waiting for pod pod-b85a9ee2-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:43:47.737: INFO: Pod pod-b85a9ee2-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:43:47.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-94fk9" for this suite. 
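[Editor's note] The (non-root,0777,tmpfs) case above combines three knobs: an emptyDir whose medium is Memory (tmpfs rather than node disk), a pod-level runAsUser pointing at a non-root UID, and a file created in the volume with mode 0777. A rough sketch of that pod, with a busybox command standing in for the mounttest image the suite actually uses; all names are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // any non-root UID

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the mounttest image
				// Write a file into the tmpfs-backed volume and show its mode.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}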
Mar 18 11:43:53.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:43:53.822: INFO: namespace: e2e-tests-emptydir-94fk9, resource: bindings, ignored listing per whitelist Mar 18 11:43:53.829: INFO: namespace e2e-tests-emptydir-94fk9 deletion completed in 6.089334451s • [SLOW TEST:10.251 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:43:53.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:43:53.953: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 18 11:43:53.959: INFO: Number of nodes with available pods: 0 Mar 18 11:43:53.959: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
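[Editor's note] The "complex daemon" case drives scheduling purely through labels: the DaemonSet's pod template carries a nodeSelector, so a daemon pod appears on a node only once that node is labelled to match, and disappears again when the label changes; the polling output below shows exactly that. A sketch of such a DaemonSet with the k8s.io/api types; the label key/value, selector and container are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // hypothetical label set

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector:       &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods are only scheduled onto nodes carrying this label, so
					// relabelling a node "blue" -> "green" moves the daemon pod.
					NodeSelector: map[string]string{"color": "blue"}, // hypothetical label key/value
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}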
Mar 18 11:43:54.001: INFO: Number of nodes with available pods: 0 Mar 18 11:43:54.001: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:43:55.008: INFO: Number of nodes with available pods: 0 Mar 18 11:43:55.008: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:43:56.005: INFO: Number of nodes with available pods: 0 Mar 18 11:43:56.005: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:43:57.006: INFO: Number of nodes with available pods: 1 Mar 18 11:43:57.006: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 18 11:43:57.035: INFO: Number of nodes with available pods: 1 Mar 18 11:43:57.035: INFO: Number of running nodes: 0, number of available pods: 1 Mar 18 11:43:58.040: INFO: Number of nodes with available pods: 0 Mar 18 11:43:58.040: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 18 11:43:58.050: INFO: Number of nodes with available pods: 0 Mar 18 11:43:58.050: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:43:59.123: INFO: Number of nodes with available pods: 0 Mar 18 11:43:59.123: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:00.105: INFO: Number of nodes with available pods: 0 Mar 18 11:44:00.105: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:01.057: INFO: Number of nodes with available pods: 0 Mar 18 11:44:01.057: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:02.054: INFO: Number of nodes with available pods: 0 Mar 18 11:44:02.054: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:03.054: INFO: Number of nodes with available pods: 0 Mar 18 11:44:03.054: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:04.055: INFO: Number of nodes with available pods: 0 Mar 18 11:44:04.055: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:05.053: INFO: Number of nodes with available pods: 0 Mar 18 11:44:05.053: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:06.054: INFO: Number of nodes with available pods: 0 Mar 18 11:44:06.054: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:07.063: INFO: Number of nodes with available pods: 0 Mar 18 11:44:07.063: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:08.055: INFO: Number of nodes with available pods: 0 Mar 18 11:44:08.055: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:09.056: INFO: Number of nodes with available pods: 0 Mar 18 11:44:09.056: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:10.054: INFO: Number of nodes with available pods: 0 Mar 18 11:44:10.055: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:11.055: INFO: Number of nodes with available pods: 0 Mar 18 11:44:11.055: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:12.055: INFO: Number of nodes with available pods: 0 Mar 18 11:44:12.055: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:13.054: INFO: Number of nodes with available pods: 0 Mar 18 11:44:13.054: INFO: Node hunter-worker is running more than one daemon pod Mar 18 11:44:14.054: INFO: Number of nodes with available pods: 0 Mar 18 11:44:14.054: INFO: Node hunter-worker is running 
more than one daemon pod Mar 18 11:44:15.055: INFO: Number of nodes with available pods: 1 Mar 18 11:44:15.055: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kj99l, will wait for the garbage collector to delete the pods Mar 18 11:44:15.120: INFO: Deleting DaemonSet.extensions daemon-set took: 6.331019ms Mar 18 11:44:15.220: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.237002ms Mar 18 11:44:21.323: INFO: Number of nodes with available pods: 0 Mar 18 11:44:21.323: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 11:44:21.326: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kj99l/daemonsets","resourceVersion":"494406"},"items":null} Mar 18 11:44:21.329: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kj99l/pods","resourceVersion":"494406"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:44:21.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-kj99l" for this suite. Mar 18 11:44:27.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:44:27.411: INFO: namespace: e2e-tests-daemonsets-kj99l, resource: bindings, ignored listing per whitelist Mar 18 11:44:27.443: INFO: namespace e2e-tests-daemonsets-kj99l deletion completed in 6.078617486s • [SLOW TEST:33.613 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:44:27.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Mar 18 11:44:27.575: INFO: Waiting up to 5m0s for pod "pod-d27f3203-690d-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-8st4c" to be "success or failure" Mar 18 11:44:27.577: INFO: Pod "pod-d27f3203-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.536789ms Mar 18 11:44:29.608: INFO: Pod "pod-d27f3203-690d-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033367986s Mar 18 11:44:31.612: INFO: Pod "pod-d27f3203-690d-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036948677s STEP: Saw pod success Mar 18 11:44:31.612: INFO: Pod "pod-d27f3203-690d-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:44:31.615: INFO: Trying to get logs from node hunter-worker pod pod-d27f3203-690d-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:44:31.645: INFO: Waiting for pod pod-d27f3203-690d-11ea-9856-0242ac11000f to disappear Mar 18 11:44:31.680: INFO: Pod pod-d27f3203-690d-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:44:31.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8st4c" for this suite. Mar 18 11:44:37.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:44:37.744: INFO: namespace: e2e-tests-emptydir-8st4c, resource: bindings, ignored listing per whitelist Mar 18 11:44:37.776: INFO: namespace e2e-tests-emptydir-8st4c deletion completed in 6.092454205s • [SLOW TEST:10.333 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:44:37.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 18 11:44:38.478: INFO: Pod name wrapped-volume-race-d9030f3c-690d-11ea-9856-0242ac11000f: Found 0 pods out of 5 Mar 18 11:44:43.484: INFO: Pod name wrapped-volume-race-d9030f3c-690d-11ea-9856-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d9030f3c-690d-11ea-9856-0242ac11000f in namespace e2e-tests-emptydir-wrapper-st299, will wait for the garbage collector to delete the pods Mar 18 11:46:25.568: INFO: Deleting ReplicationController wrapped-volume-race-d9030f3c-690d-11ea-9856-0242ac11000f took: 7.204285ms Mar 18 11:46:25.668: INFO: Terminating ReplicationController wrapped-volume-race-d9030f3c-690d-11ea-9856-0242ac11000f pods took: 100.218067ms STEP: Creating RC which spawns configmap-volume pods Mar 18 11:47:11.495: INFO: Pod name wrapped-volume-race-3436cb66-690e-11ea-9856-0242ac11000f: Found 0 pods out of 5 Mar 18 11:47:16.503: INFO: Pod name 
wrapped-volume-race-3436cb66-690e-11ea-9856-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3436cb66-690e-11ea-9856-0242ac11000f in namespace e2e-tests-emptydir-wrapper-st299, will wait for the garbage collector to delete the pods Mar 18 11:49:30.588: INFO: Deleting ReplicationController wrapped-volume-race-3436cb66-690e-11ea-9856-0242ac11000f took: 8.028144ms Mar 18 11:49:30.688: INFO: Terminating ReplicationController wrapped-volume-race-3436cb66-690e-11ea-9856-0242ac11000f pods took: 100.272759ms STEP: Creating RC which spawns configmap-volume pods Mar 18 11:50:12.540: INFO: Pod name wrapped-volume-race-a01c4ba0-690e-11ea-9856-0242ac11000f: Found 0 pods out of 5 Mar 18 11:50:17.549: INFO: Pod name wrapped-volume-race-a01c4ba0-690e-11ea-9856-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a01c4ba0-690e-11ea-9856-0242ac11000f in namespace e2e-tests-emptydir-wrapper-st299, will wait for the garbage collector to delete the pods Mar 18 11:52:01.634: INFO: Deleting ReplicationController wrapped-volume-race-a01c4ba0-690e-11ea-9856-0242ac11000f took: 7.275742ms Mar 18 11:52:01.834: INFO: Terminating ReplicationController wrapped-volume-race-a01c4ba0-690e-11ea-9856-0242ac11000f pods took: 200.276505ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:52:42.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-st299" for this suite. Mar 18 11:52:50.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:52:50.966: INFO: namespace: e2e-tests-emptydir-wrapper-st299, resource: bindings, ignored listing per whitelist Mar 18 11:52:50.973: INFO: namespace e2e-tests-emptydir-wrapper-st299 deletion completed in 8.090135549s • [SLOW TEST:493.196 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:52:50.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-fea4f7ec-690e-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 11:52:51.110: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-fea6ce2d-690e-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-pvvfp" to be "success or failure" Mar 18 11:52:51.125: INFO: Pod "pod-projected-configmaps-fea6ce2d-690e-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.664842ms Mar 18 11:52:53.128: INFO: Pod "pod-projected-configmaps-fea6ce2d-690e-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018428389s Mar 18 11:52:55.132: INFO: Pod "pod-projected-configmaps-fea6ce2d-690e-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02184538s STEP: Saw pod success Mar 18 11:52:55.132: INFO: Pod "pod-projected-configmaps-fea6ce2d-690e-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:52:55.134: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-fea6ce2d-690e-11ea-9856-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 18 11:52:55.157: INFO: Waiting for pod pod-projected-configmaps-fea6ce2d-690e-11ea-9856-0242ac11000f to disappear Mar 18 11:52:55.162: INFO: Pod pod-projected-configmaps-fea6ce2d-690e-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:52:55.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pvvfp" for this suite. Mar 18 11:53:01.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:53:01.231: INFO: namespace: e2e-tests-projected-pvvfp, resource: bindings, ignored listing per whitelist Mar 18 11:53:01.265: INFO: namespace e2e-tests-projected-pvvfp deletion completed in 6.099456571s • [SLOW TEST:10.292 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:53:01.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jtwcg Mar 18 11:53:05.379: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jtwcg STEP: checking the pod's current state and verifying that restartCount is present Mar 18 11:53:05.382: INFO: Initial restart count of pod liveness-http is 0 Mar 18 11:53:25.425: INFO: Restart count 
of pod e2e-tests-container-probe-jtwcg/liveness-http is now 1 (20.043399632s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:53:25.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jtwcg" for this suite. Mar 18 11:53:31.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:53:31.469: INFO: namespace: e2e-tests-container-probe-jtwcg, resource: bindings, ignored listing per whitelist Mar 18 11:53:31.525: INFO: namespace e2e-tests-container-probe-jtwcg deletion completed in 6.087410819s • [SLOW TEST:30.260 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:53:31.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 18 11:53:31.641: INFO: Waiting up to 5m0s for pod "pod-16ca1315-690f-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-2vft7" to be "success or failure" Mar 18 11:53:31.647: INFO: Pod "pod-16ca1315-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.680597ms Mar 18 11:53:33.651: INFO: Pod "pod-16ca1315-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010540564s Mar 18 11:53:35.655: INFO: Pod "pod-16ca1315-690f-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014683472s STEP: Saw pod success Mar 18 11:53:35.655: INFO: Pod "pod-16ca1315-690f-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:53:35.658: INFO: Trying to get logs from node hunter-worker pod pod-16ca1315-690f-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 11:53:35.692: INFO: Waiting for pod pod-16ca1315-690f-11ea-9856-0242ac11000f to disappear Mar 18 11:53:35.710: INFO: Pod pod-16ca1315-690f-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:53:35.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2vft7" for this suite. 
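[Editor's note] The liveness-http pod earlier in this block is restarted once its /healthz endpoint starts failing (restart count goes from 0 to 1 after roughly 20 seconds in the log). The mechanism is an HTTP liveness probe on the container; a minimal sketch with the k8s.io/api types follows, where the image tag and the probe timings are assumptions rather than the test's exact values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.0", // tag is an assumption
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // named ProbeHandler in newer API versions
						// kubelet GETs this path; non-2xx/3xx responses count as failures.
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Once the probe fails FailureThreshold times, the kubelet kills and restarts the container, which is the restartCount transition the test asserts on.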
Mar 18 11:53:41.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:53:41.801: INFO: namespace: e2e-tests-emptydir-2vft7, resource: bindings, ignored listing per whitelist Mar 18 11:53:41.810: INFO: namespace e2e-tests-emptydir-2vft7 deletion completed in 6.096353444s • [SLOW TEST:10.284 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:53:41.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:53:41.932: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 18 11:53:41.949: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 18 11:53:46.954: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 18 11:53:46.955: INFO: Creating deployment "test-rolling-update-deployment" Mar 18 11:53:46.959: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 18 11:53:46.965: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 18 11:53:49.002: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 18 11:53:49.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720129227, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720129227, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720129227, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720129226, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 11:53:51.008: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 18 11:53:51.019: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-59gpw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59gpw/deployments/test-rolling-update-deployment,UID:1ff173aa-690f-11ea-99e8-0242ac110002,ResourceVersion:496091,Generation:1,CreationTimestamp:2020-03-18 11:53:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-18 11:53:47 +0000 UTC 2020-03-18 11:53:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-18 11:53:49 +0000 UTC 2020-03-18 11:53:46 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 18 11:53:51.023: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-59gpw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59gpw/replicasets/test-rolling-update-deployment-75db98fb4c,UID:1ff392eb-690f-11ea-99e8-0242ac110002,ResourceVersion:496082,Generation:1,CreationTimestamp:2020-03-18 11:53:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1ff173aa-690f-11ea-99e8-0242ac110002 0xc002535ba7 0xc002535ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 18 11:53:51.023: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 18 11:53:51.023: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-59gpw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59gpw/replicasets/test-rolling-update-controller,UID:1cf30bf6-690f-11ea-99e8-0242ac110002,ResourceVersion:496090,Generation:2,CreationTimestamp:2020-03-18 11:53:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1ff173aa-690f-11ea-99e8-0242ac110002 0xc002535ae7 0xc002535ae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 11:53:51.027: INFO: Pod "test-rolling-update-deployment-75db98fb4c-ngg82" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-ngg82,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-59gpw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-59gpw/pods/test-rolling-update-deployment-75db98fb4c-ngg82,UID:1ff638e4-690f-11ea-99e8-0242ac110002,ResourceVersion:496081,Generation:0,CreationTimestamp:2020-03-18 11:53:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 1ff392eb-690f-11ea-99e8-0242ac110002 0xc0021af427 0xc0021af428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-447bh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-447bh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-447bh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021af500} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021af520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:53:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:53:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:53:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 11:53:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.100,StartTime:2020-03-18 11:53:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-18 11:53:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3c3e513a78d5931e0a270f572d5e1da6385e73324347279f4f5783464df00e2d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:53:51.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-59gpw" for this suite. 
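Note on the rolling update exercised above: the Deployment controller creates a new ReplicaSet for the changed pod template (the pod-template-hash 75db98fb4c set) and scales the old one to zero. The following is only a minimal client-go sketch of triggering such an update, assuming the context-free method signatures that match the v1.13 client libraries this suite was built against; the namespace, Deployment name and image are illustrative, not the exact code the test runs.

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ns, name := "default", "test-rolling-update-deployment" // illustrative names

	// Fetch the Deployment, change the pod template, and push the update.
	// With the default RollingUpdate strategy the controller brings up a new
	// ReplicaSet for the new template and scales the old one down to zero,
	// which is the old/new ReplicaSet pair dumped in the log above.
	d, err := cs.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	d.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
	if _, err := cs.AppsV1().Deployments(ns).Update(d); err != nil {
		log.Fatal(err)
	}
	fmt.Println("rolling update triggered for", name)
}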
Mar 18 11:53:57.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:53:57.089: INFO: namespace: e2e-tests-deployment-59gpw, resource: bindings, ignored listing per whitelist Mar 18 11:53:57.150: INFO: namespace e2e-tests-deployment-59gpw deletion completed in 6.118670775s • [SLOW TEST:15.340 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:53:57.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-mlrlp I0318 11:53:57.289550 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-mlrlp, replica count: 1 I0318 11:53:58.339979 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 11:53:59.340183 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 11:54:00.340431 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 18 11:54:00.470: INFO: Created: latency-svc-7whjn Mar 18 11:54:00.524: INFO: Got endpoints: latency-svc-7whjn [83.618923ms] Mar 18 11:54:00.559: INFO: Created: latency-svc-qmtkx Mar 18 11:54:00.567: INFO: Got endpoints: latency-svc-qmtkx [42.266637ms] Mar 18 11:54:00.593: INFO: Created: latency-svc-7vkk7 Mar 18 11:54:00.610: INFO: Got endpoints: latency-svc-7vkk7 [85.911355ms] Mar 18 11:54:00.662: INFO: Created: latency-svc-brzhw Mar 18 11:54:00.664: INFO: Got endpoints: latency-svc-brzhw [139.60518ms] Mar 18 11:54:00.708: INFO: Created: latency-svc-6n7nv Mar 18 11:54:00.739: INFO: Got endpoints: latency-svc-6n7nv [215.001788ms] Mar 18 11:54:00.789: INFO: Created: latency-svc-rrrth Mar 18 11:54:00.799: INFO: Got endpoints: latency-svc-rrrth [275.497436ms] Mar 18 11:54:00.835: INFO: Created: latency-svc-bsrdh Mar 18 11:54:00.854: INFO: Got endpoints: latency-svc-bsrdh [329.776997ms] Mar 18 11:54:00.887: INFO: Created: latency-svc-c4r22 Mar 18 11:54:00.920: INFO: Got endpoints: latency-svc-c4r22 [396.553442ms] Mar 18 11:54:00.941: INFO: Created: latency-svc-gwksp Mar 18 11:54:00.950: INFO: Got endpoints: latency-svc-gwksp [426.269125ms] Mar 18 11:54:00.983: INFO: Created: latency-svc-z9l2h Mar 18 11:54:01.011: INFO: Got endpoints: latency-svc-z9l2h [486.090292ms] Mar 18 11:54:01.059: INFO: Created: latency-svc-ghk8q Mar 18 11:54:01.062: INFO: Got endpoints: latency-svc-ghk8q 
[538.159026ms] Mar 18 11:54:01.087: INFO: Created: latency-svc-k9sx7 Mar 18 11:54:01.101: INFO: Got endpoints: latency-svc-k9sx7 [576.799144ms] Mar 18 11:54:01.124: INFO: Created: latency-svc-6nzd9 Mar 18 11:54:01.137: INFO: Got endpoints: latency-svc-6nzd9 [612.753296ms] Mar 18 11:54:01.157: INFO: Created: latency-svc-4m9wd Mar 18 11:54:01.196: INFO: Got endpoints: latency-svc-4m9wd [671.692005ms] Mar 18 11:54:01.211: INFO: Created: latency-svc-nr4f8 Mar 18 11:54:01.228: INFO: Got endpoints: latency-svc-nr4f8 [704.246902ms] Mar 18 11:54:01.262: INFO: Created: latency-svc-v92kd Mar 18 11:54:01.276: INFO: Got endpoints: latency-svc-v92kd [751.641842ms] Mar 18 11:54:01.334: INFO: Created: latency-svc-xvjgc Mar 18 11:54:01.337: INFO: Got endpoints: latency-svc-xvjgc [769.574211ms] Mar 18 11:54:01.385: INFO: Created: latency-svc-8ftx6 Mar 18 11:54:01.396: INFO: Got endpoints: latency-svc-8ftx6 [786.23569ms] Mar 18 11:54:01.420: INFO: Created: latency-svc-jwrkq Mar 18 11:54:01.483: INFO: Got endpoints: latency-svc-jwrkq [819.731189ms] Mar 18 11:54:01.501: INFO: Created: latency-svc-kvcqn Mar 18 11:54:01.515: INFO: Got endpoints: latency-svc-kvcqn [775.567608ms] Mar 18 11:54:01.538: INFO: Created: latency-svc-f5v62 Mar 18 11:54:01.550: INFO: Got endpoints: latency-svc-f5v62 [750.883858ms] Mar 18 11:54:01.573: INFO: Created: latency-svc-44rqv Mar 18 11:54:01.634: INFO: Got endpoints: latency-svc-44rqv [779.741691ms] Mar 18 11:54:01.636: INFO: Created: latency-svc-zzmk2 Mar 18 11:54:01.640: INFO: Got endpoints: latency-svc-zzmk2 [719.973508ms] Mar 18 11:54:01.673: INFO: Created: latency-svc-lgzb6 Mar 18 11:54:01.689: INFO: Got endpoints: latency-svc-lgzb6 [738.873469ms] Mar 18 11:54:01.708: INFO: Created: latency-svc-9wngd Mar 18 11:54:01.726: INFO: Got endpoints: latency-svc-9wngd [714.289732ms] Mar 18 11:54:01.778: INFO: Created: latency-svc-9vjjp Mar 18 11:54:01.785: INFO: Got endpoints: latency-svc-9vjjp [723.02089ms] Mar 18 11:54:01.807: INFO: Created: latency-svc-9gcvt Mar 18 11:54:01.815: INFO: Got endpoints: latency-svc-9gcvt [714.60269ms] Mar 18 11:54:01.849: INFO: Created: latency-svc-tk5qp Mar 18 11:54:01.921: INFO: Got endpoints: latency-svc-tk5qp [784.315899ms] Mar 18 11:54:01.937: INFO: Created: latency-svc-954ck Mar 18 11:54:01.948: INFO: Got endpoints: latency-svc-954ck [752.431801ms] Mar 18 11:54:01.981: INFO: Created: latency-svc-ftmml Mar 18 11:54:02.005: INFO: Got endpoints: latency-svc-ftmml [776.280097ms] Mar 18 11:54:02.071: INFO: Created: latency-svc-pczq9 Mar 18 11:54:02.073: INFO: Got endpoints: latency-svc-pczq9 [797.809044ms] Mar 18 11:54:02.110: INFO: Created: latency-svc-94dn8 Mar 18 11:54:02.123: INFO: Got endpoints: latency-svc-94dn8 [785.656641ms] Mar 18 11:54:02.147: INFO: Created: latency-svc-74nq8 Mar 18 11:54:02.159: INFO: Got endpoints: latency-svc-74nq8 [762.54067ms] Mar 18 11:54:02.235: INFO: Created: latency-svc-8bl4j Mar 18 11:54:02.237: INFO: Got endpoints: latency-svc-8bl4j [753.276223ms] Mar 18 11:54:02.299: INFO: Created: latency-svc-nxcsk Mar 18 11:54:02.310: INFO: Got endpoints: latency-svc-nxcsk [794.954527ms] Mar 18 11:54:02.389: INFO: Created: latency-svc-4dvg2 Mar 18 11:54:02.393: INFO: Got endpoints: latency-svc-4dvg2 [842.856432ms] Mar 18 11:54:02.423: INFO: Created: latency-svc-qfgvd Mar 18 11:54:02.436: INFO: Got endpoints: latency-svc-qfgvd [802.267803ms] Mar 18 11:54:02.458: INFO: Created: latency-svc-nt8q8 Mar 18 11:54:02.472: INFO: Got endpoints: latency-svc-nt8q8 [831.617839ms] Mar 18 11:54:02.580: INFO: Created: latency-svc-snss7 Mar 18 
11:54:02.592: INFO: Got endpoints: latency-svc-snss7 [902.80183ms] Mar 18 11:54:02.629: INFO: Created: latency-svc-pl5br Mar 18 11:54:02.640: INFO: Got endpoints: latency-svc-pl5br [914.755207ms] Mar 18 11:54:02.665: INFO: Created: latency-svc-fkxgl Mar 18 11:54:02.677: INFO: Got endpoints: latency-svc-fkxgl [891.499091ms] Mar 18 11:54:02.730: INFO: Created: latency-svc-t6hf9 Mar 18 11:54:02.743: INFO: Got endpoints: latency-svc-t6hf9 [927.38569ms] Mar 18 11:54:02.764: INFO: Created: latency-svc-cz5gs Mar 18 11:54:02.779: INFO: Got endpoints: latency-svc-cz5gs [857.920794ms] Mar 18 11:54:02.812: INFO: Created: latency-svc-726cf Mar 18 11:54:02.879: INFO: Got endpoints: latency-svc-726cf [930.681489ms] Mar 18 11:54:02.898: INFO: Created: latency-svc-hftfw Mar 18 11:54:02.912: INFO: Got endpoints: latency-svc-hftfw [906.811187ms] Mar 18 11:54:02.936: INFO: Created: latency-svc-wk4qg Mar 18 11:54:02.948: INFO: Got endpoints: latency-svc-wk4qg [874.417067ms] Mar 18 11:54:02.976: INFO: Created: latency-svc-hhqj5 Mar 18 11:54:03.028: INFO: Got endpoints: latency-svc-hhqj5 [905.529983ms] Mar 18 11:54:03.046: INFO: Created: latency-svc-2qcrs Mar 18 11:54:03.082: INFO: Got endpoints: latency-svc-2qcrs [922.964365ms] Mar 18 11:54:03.115: INFO: Created: latency-svc-j97xp Mar 18 11:54:03.160: INFO: Got endpoints: latency-svc-j97xp [923.18507ms] Mar 18 11:54:03.168: INFO: Created: latency-svc-5bkvp Mar 18 11:54:03.180: INFO: Got endpoints: latency-svc-5bkvp [870.28826ms] Mar 18 11:54:03.208: INFO: Created: latency-svc-zhpj2 Mar 18 11:54:03.228: INFO: Got endpoints: latency-svc-zhpj2 [835.137552ms] Mar 18 11:54:03.322: INFO: Created: latency-svc-4vp74 Mar 18 11:54:03.326: INFO: Got endpoints: latency-svc-4vp74 [889.736336ms] Mar 18 11:54:03.355: INFO: Created: latency-svc-tsfrc Mar 18 11:54:03.379: INFO: Got endpoints: latency-svc-tsfrc [906.803625ms] Mar 18 11:54:03.409: INFO: Created: latency-svc-kkclh Mar 18 11:54:03.421: INFO: Got endpoints: latency-svc-kkclh [829.299307ms] Mar 18 11:54:03.490: INFO: Created: latency-svc-5jrb8 Mar 18 11:54:03.493: INFO: Got endpoints: latency-svc-5jrb8 [852.584617ms] Mar 18 11:54:03.526: INFO: Created: latency-svc-xtztj Mar 18 11:54:03.549: INFO: Got endpoints: latency-svc-xtztj [872.458275ms] Mar 18 11:54:03.574: INFO: Created: latency-svc-g259d Mar 18 11:54:03.584: INFO: Got endpoints: latency-svc-g259d [840.626378ms] Mar 18 11:54:03.634: INFO: Created: latency-svc-gcdjq Mar 18 11:54:03.638: INFO: Got endpoints: latency-svc-gcdjq [858.912765ms] Mar 18 11:54:03.660: INFO: Created: latency-svc-jpgc8 Mar 18 11:54:03.674: INFO: Got endpoints: latency-svc-jpgc8 [795.329096ms] Mar 18 11:54:03.696: INFO: Created: latency-svc-sfkjv Mar 18 11:54:03.714: INFO: Got endpoints: latency-svc-sfkjv [802.631718ms] Mar 18 11:54:03.783: INFO: Created: latency-svc-2xtrn Mar 18 11:54:03.786: INFO: Got endpoints: latency-svc-2xtrn [837.967165ms] Mar 18 11:54:03.831: INFO: Created: latency-svc-4zs82 Mar 18 11:54:03.867: INFO: Got endpoints: latency-svc-4zs82 [838.554989ms] Mar 18 11:54:03.946: INFO: Created: latency-svc-z7brq Mar 18 11:54:03.957: INFO: Got endpoints: latency-svc-z7brq [875.34492ms] Mar 18 11:54:03.978: INFO: Created: latency-svc-glwrk Mar 18 11:54:04.007: INFO: Got endpoints: latency-svc-glwrk [847.362561ms] Mar 18 11:54:04.065: INFO: Created: latency-svc-7qwbp Mar 18 11:54:04.078: INFO: Got endpoints: latency-svc-7qwbp [897.700336ms] Mar 18 11:54:04.101: INFO: Created: latency-svc-28grv Mar 18 11:54:04.114: INFO: Got endpoints: latency-svc-28grv [885.153041ms] Mar 18 
11:54:04.143: INFO: Created: latency-svc-ndnmq Mar 18 11:54:04.202: INFO: Got endpoints: latency-svc-ndnmq [876.355407ms] Mar 18 11:54:04.218: INFO: Created: latency-svc-g45nm Mar 18 11:54:04.240: INFO: Got endpoints: latency-svc-g45nm [860.498323ms] Mar 18 11:54:04.260: INFO: Created: latency-svc-p42g7 Mar 18 11:54:04.277: INFO: Got endpoints: latency-svc-p42g7 [854.970778ms] Mar 18 11:54:04.352: INFO: Created: latency-svc-nkjj6 Mar 18 11:54:04.355: INFO: Got endpoints: latency-svc-nkjj6 [862.32443ms] Mar 18 11:54:04.383: INFO: Created: latency-svc-g82rf Mar 18 11:54:04.391: INFO: Got endpoints: latency-svc-g82rf [841.852244ms] Mar 18 11:54:04.413: INFO: Created: latency-svc-x6djn Mar 18 11:54:04.422: INFO: Got endpoints: latency-svc-x6djn [837.914296ms] Mar 18 11:54:04.452: INFO: Created: latency-svc-wpmjn Mar 18 11:54:04.507: INFO: Got endpoints: latency-svc-wpmjn [869.223262ms] Mar 18 11:54:04.510: INFO: Created: latency-svc-bpxdp Mar 18 11:54:04.518: INFO: Got endpoints: latency-svc-bpxdp [843.125689ms] Mar 18 11:54:04.542: INFO: Created: latency-svc-7mdgd Mar 18 11:54:04.554: INFO: Got endpoints: latency-svc-7mdgd [839.546033ms] Mar 18 11:54:04.575: INFO: Created: latency-svc-qxrvd Mar 18 11:54:04.599: INFO: Got endpoints: latency-svc-qxrvd [812.503308ms] Mar 18 11:54:04.657: INFO: Created: latency-svc-lb9d6 Mar 18 11:54:04.677: INFO: Got endpoints: latency-svc-lb9d6 [810.247335ms] Mar 18 11:54:04.709: INFO: Created: latency-svc-ct7s5 Mar 18 11:54:04.723: INFO: Got endpoints: latency-svc-ct7s5 [765.296073ms] Mar 18 11:54:04.739: INFO: Created: latency-svc-d5rvj Mar 18 11:54:04.753: INFO: Got endpoints: latency-svc-d5rvj [745.855684ms] Mar 18 11:54:04.807: INFO: Created: latency-svc-4m8zp Mar 18 11:54:04.833: INFO: Got endpoints: latency-svc-4m8zp [755.411607ms] Mar 18 11:54:04.833: INFO: Created: latency-svc-xrk78 Mar 18 11:54:04.846: INFO: Got endpoints: latency-svc-xrk78 [731.911049ms] Mar 18 11:54:04.869: INFO: Created: latency-svc-kn8x4 Mar 18 11:54:04.880: INFO: Got endpoints: latency-svc-kn8x4 [677.491144ms] Mar 18 11:54:04.963: INFO: Created: latency-svc-68nl4 Mar 18 11:54:04.979: INFO: Got endpoints: latency-svc-68nl4 [739.267211ms] Mar 18 11:54:05.009: INFO: Created: latency-svc-clv42 Mar 18 11:54:05.024: INFO: Got endpoints: latency-svc-clv42 [747.896015ms] Mar 18 11:54:05.055: INFO: Created: latency-svc-flw4g Mar 18 11:54:05.094: INFO: Got endpoints: latency-svc-flw4g [738.763385ms] Mar 18 11:54:05.109: INFO: Created: latency-svc-vhpnc Mar 18 11:54:05.132: INFO: Got endpoints: latency-svc-vhpnc [741.293356ms] Mar 18 11:54:05.166: INFO: Created: latency-svc-nmfdb Mar 18 11:54:05.181: INFO: Got endpoints: latency-svc-nmfdb [759.829805ms] Mar 18 11:54:05.263: INFO: Created: latency-svc-wzprz Mar 18 11:54:05.265: INFO: Got endpoints: latency-svc-wzprz [132.748657ms] Mar 18 11:54:05.297: INFO: Created: latency-svc-kzb2b Mar 18 11:54:05.314: INFO: Got endpoints: latency-svc-kzb2b [806.582238ms] Mar 18 11:54:05.331: INFO: Created: latency-svc-qfzqd Mar 18 11:54:05.344: INFO: Got endpoints: latency-svc-qfzqd [826.470464ms] Mar 18 11:54:05.436: INFO: Created: latency-svc-9c72f Mar 18 11:54:05.440: INFO: Got endpoints: latency-svc-9c72f [885.640272ms] Mar 18 11:54:05.475: INFO: Created: latency-svc-6hkbp Mar 18 11:54:05.488: INFO: Got endpoints: latency-svc-6hkbp [889.880645ms] Mar 18 11:54:05.513: INFO: Created: latency-svc-btrrk Mar 18 11:54:05.525: INFO: Got endpoints: latency-svc-btrrk [847.767575ms] Mar 18 11:54:05.580: INFO: Created: latency-svc-szgcv Mar 18 11:54:05.583: 
INFO: Got endpoints: latency-svc-szgcv [860.634249ms] Mar 18 11:54:05.627: INFO: Created: latency-svc-z5zm4 Mar 18 11:54:05.655: INFO: Got endpoints: latency-svc-z5zm4 [901.135339ms] Mar 18 11:54:05.679: INFO: Created: latency-svc-vhww6 Mar 18 11:54:05.735: INFO: Got endpoints: latency-svc-vhww6 [901.6525ms] Mar 18 11:54:05.753: INFO: Created: latency-svc-d8952 Mar 18 11:54:05.772: INFO: Got endpoints: latency-svc-d8952 [926.212298ms] Mar 18 11:54:05.795: INFO: Created: latency-svc-kz27d Mar 18 11:54:05.808: INFO: Got endpoints: latency-svc-kz27d [928.087475ms] Mar 18 11:54:05.825: INFO: Created: latency-svc-g4t8n Mar 18 11:54:05.861: INFO: Got endpoints: latency-svc-g4t8n [881.529007ms] Mar 18 11:54:05.895: INFO: Created: latency-svc-klw28 Mar 18 11:54:05.923: INFO: Got endpoints: latency-svc-klw28 [898.294407ms] Mar 18 11:54:05.955: INFO: Created: latency-svc-z7c9f Mar 18 11:54:06.016: INFO: Got endpoints: latency-svc-z7c9f [921.922744ms] Mar 18 11:54:06.020: INFO: Created: latency-svc-gm2gs Mar 18 11:54:06.025: INFO: Got endpoints: latency-svc-gm2gs [843.12668ms] Mar 18 11:54:06.047: INFO: Created: latency-svc-xl5rr Mar 18 11:54:06.061: INFO: Got endpoints: latency-svc-xl5rr [795.675635ms] Mar 18 11:54:06.083: INFO: Created: latency-svc-98rz4 Mar 18 11:54:06.097: INFO: Got endpoints: latency-svc-98rz4 [783.256972ms] Mar 18 11:54:06.116: INFO: Created: latency-svc-jvpbf Mar 18 11:54:06.154: INFO: Got endpoints: latency-svc-jvpbf [810.026673ms] Mar 18 11:54:06.170: INFO: Created: latency-svc-867xs Mar 18 11:54:06.184: INFO: Got endpoints: latency-svc-867xs [743.93561ms] Mar 18 11:54:06.206: INFO: Created: latency-svc-nzsmv Mar 18 11:54:06.232: INFO: Got endpoints: latency-svc-nzsmv [743.626879ms] Mar 18 11:54:06.286: INFO: Created: latency-svc-z6jtv Mar 18 11:54:06.298: INFO: Got endpoints: latency-svc-z6jtv [773.306219ms] Mar 18 11:54:06.333: INFO: Created: latency-svc-qdg8t Mar 18 11:54:06.346: INFO: Got endpoints: latency-svc-qdg8t [762.80963ms] Mar 18 11:54:06.369: INFO: Created: latency-svc-cmnd5 Mar 18 11:54:06.383: INFO: Got endpoints: latency-svc-cmnd5 [728.295583ms] Mar 18 11:54:06.442: INFO: Created: latency-svc-5s5c8 Mar 18 11:54:06.445: INFO: Got endpoints: latency-svc-5s5c8 [709.991912ms] Mar 18 11:54:06.473: INFO: Created: latency-svc-7bf2c Mar 18 11:54:06.485: INFO: Got endpoints: latency-svc-7bf2c [713.480642ms] Mar 18 11:54:06.509: INFO: Created: latency-svc-hkvwl Mar 18 11:54:06.522: INFO: Got endpoints: latency-svc-hkvwl [713.621392ms] Mar 18 11:54:06.538: INFO: Created: latency-svc-crjk6 Mar 18 11:54:06.598: INFO: Got endpoints: latency-svc-crjk6 [736.79885ms] Mar 18 11:54:06.599: INFO: Created: latency-svc-cxzst Mar 18 11:54:06.606: INFO: Got endpoints: latency-svc-cxzst [683.004331ms] Mar 18 11:54:06.627: INFO: Created: latency-svc-7hgxl Mar 18 11:54:06.648: INFO: Got endpoints: latency-svc-7hgxl [631.865704ms] Mar 18 11:54:06.682: INFO: Created: latency-svc-zfr4r Mar 18 11:54:06.777: INFO: Got endpoints: latency-svc-zfr4r [752.238947ms] Mar 18 11:54:06.813: INFO: Created: latency-svc-pzv9g Mar 18 11:54:06.834: INFO: Got endpoints: latency-svc-pzv9g [772.802826ms] Mar 18 11:54:06.863: INFO: Created: latency-svc-kg4t9 Mar 18 11:54:06.933: INFO: Got endpoints: latency-svc-kg4t9 [835.909174ms] Mar 18 11:54:06.935: INFO: Created: latency-svc-dxdsb Mar 18 11:54:06.943: INFO: Got endpoints: latency-svc-dxdsb [788.518849ms] Mar 18 11:54:06.960: INFO: Created: latency-svc-687ms Mar 18 11:54:06.973: INFO: Got endpoints: latency-svc-687ms [789.25952ms] Mar 18 11:54:07.004: 
INFO: Created: latency-svc-525kl Mar 18 11:54:07.065: INFO: Got endpoints: latency-svc-525kl [832.345383ms] Mar 18 11:54:07.077: INFO: Created: latency-svc-6jb4b Mar 18 11:54:07.087: INFO: Got endpoints: latency-svc-6jb4b [788.72221ms] Mar 18 11:54:07.109: INFO: Created: latency-svc-qdw9n Mar 18 11:54:07.124: INFO: Got endpoints: latency-svc-qdw9n [777.446477ms] Mar 18 11:54:07.157: INFO: Created: latency-svc-q4dj6 Mar 18 11:54:07.196: INFO: Got endpoints: latency-svc-q4dj6 [813.105019ms] Mar 18 11:54:07.205: INFO: Created: latency-svc-g2qwm Mar 18 11:54:07.459: INFO: Got endpoints: latency-svc-g2qwm [1.014549407s] Mar 18 11:54:07.480: INFO: Created: latency-svc-7fr9s Mar 18 11:54:07.496: INFO: Got endpoints: latency-svc-7fr9s [1.010934046s] Mar 18 11:54:07.523: INFO: Created: latency-svc-2bkft Mar 18 11:54:07.538: INFO: Got endpoints: latency-svc-2bkft [1.016755517s] Mar 18 11:54:07.559: INFO: Created: latency-svc-2mpcv Mar 18 11:54:07.597: INFO: Got endpoints: latency-svc-2mpcv [999.508724ms] Mar 18 11:54:07.613: INFO: Created: latency-svc-xf4vc Mar 18 11:54:07.645: INFO: Got endpoints: latency-svc-xf4vc [1.039354116s] Mar 18 11:54:07.688: INFO: Created: latency-svc-jzslf Mar 18 11:54:07.771: INFO: Got endpoints: latency-svc-jzslf [1.122665595s] Mar 18 11:54:07.773: INFO: Created: latency-svc-2fzhx Mar 18 11:54:07.786: INFO: Got endpoints: latency-svc-2fzhx [1.008541256s] Mar 18 11:54:07.823: INFO: Created: latency-svc-8bxd5 Mar 18 11:54:07.846: INFO: Got endpoints: latency-svc-8bxd5 [1.011815969s] Mar 18 11:54:07.871: INFO: Created: latency-svc-7fdkt Mar 18 11:54:07.933: INFO: Got endpoints: latency-svc-7fdkt [999.389791ms] Mar 18 11:54:07.934: INFO: Created: latency-svc-l6qfz Mar 18 11:54:07.954: INFO: Got endpoints: latency-svc-l6qfz [1.01149928s] Mar 18 11:54:07.999: INFO: Created: latency-svc-8klr2 Mar 18 11:54:08.076: INFO: Got endpoints: latency-svc-8klr2 [1.103411711s] Mar 18 11:54:08.090: INFO: Created: latency-svc-c9hzj Mar 18 11:54:08.116: INFO: Got endpoints: latency-svc-c9hzj [1.051007944s] Mar 18 11:54:08.146: INFO: Created: latency-svc-xxtqv Mar 18 11:54:08.159: INFO: Got endpoints: latency-svc-xxtqv [1.071889945s] Mar 18 11:54:08.176: INFO: Created: latency-svc-9fljb Mar 18 11:54:08.238: INFO: Got endpoints: latency-svc-9fljb [1.114421117s] Mar 18 11:54:08.240: INFO: Created: latency-svc-fzl6p Mar 18 11:54:08.243: INFO: Got endpoints: latency-svc-fzl6p [1.047289437s] Mar 18 11:54:08.263: INFO: Created: latency-svc-pmjwg Mar 18 11:54:08.274: INFO: Got endpoints: latency-svc-pmjwg [814.674495ms] Mar 18 11:54:08.294: INFO: Created: latency-svc-5llzk Mar 18 11:54:08.305: INFO: Got endpoints: latency-svc-5llzk [808.082921ms] Mar 18 11:54:08.326: INFO: Created: latency-svc-qv4vm Mar 18 11:54:08.370: INFO: Got endpoints: latency-svc-qv4vm [831.498769ms] Mar 18 11:54:08.380: INFO: Created: latency-svc-hpb2s Mar 18 11:54:08.401: INFO: Got endpoints: latency-svc-hpb2s [803.789427ms] Mar 18 11:54:08.422: INFO: Created: latency-svc-qt7cx Mar 18 11:54:08.437: INFO: Got endpoints: latency-svc-qt7cx [791.986695ms] Mar 18 11:54:08.526: INFO: Created: latency-svc-qx8mp Mar 18 11:54:08.528: INFO: Got endpoints: latency-svc-qx8mp [757.447748ms] Mar 18 11:54:08.551: INFO: Created: latency-svc-6mt9g Mar 18 11:54:08.564: INFO: Got endpoints: latency-svc-6mt9g [778.4844ms] Mar 18 11:54:08.582: INFO: Created: latency-svc-mm5sk Mar 18 11:54:08.594: INFO: Got endpoints: latency-svc-mm5sk [748.317593ms] Mar 18 11:54:08.614: INFO: Created: latency-svc-dxlhd Mar 18 11:54:08.705: INFO: Got endpoints: 
latency-svc-dxlhd [772.328486ms] Mar 18 11:54:08.707: INFO: Created: latency-svc-6ph4b Mar 18 11:54:08.715: INFO: Got endpoints: latency-svc-6ph4b [760.197596ms] Mar 18 11:54:08.731: INFO: Created: latency-svc-fvlkd Mar 18 11:54:08.745: INFO: Got endpoints: latency-svc-fvlkd [668.879177ms] Mar 18 11:54:08.770: INFO: Created: latency-svc-q7dvq Mar 18 11:54:08.775: INFO: Got endpoints: latency-svc-q7dvq [659.524931ms] Mar 18 11:54:08.798: INFO: Created: latency-svc-zkbpb Mar 18 11:54:08.879: INFO: Got endpoints: latency-svc-zkbpb [719.540598ms] Mar 18 11:54:08.883: INFO: Created: latency-svc-pbpzj Mar 18 11:54:08.896: INFO: Got endpoints: latency-svc-pbpzj [657.497259ms] Mar 18 11:54:08.914: INFO: Created: latency-svc-85rl8 Mar 18 11:54:08.926: INFO: Got endpoints: latency-svc-85rl8 [682.581087ms] Mar 18 11:54:08.947: INFO: Created: latency-svc-jrqcc Mar 18 11:54:09.004: INFO: Got endpoints: latency-svc-jrqcc [730.040547ms] Mar 18 11:54:09.027: INFO: Created: latency-svc-878qv Mar 18 11:54:09.061: INFO: Got endpoints: latency-svc-878qv [756.356187ms] Mar 18 11:54:09.087: INFO: Created: latency-svc-wxtnk Mar 18 11:54:09.101: INFO: Got endpoints: latency-svc-wxtnk [731.096705ms] Mar 18 11:54:09.149: INFO: Created: latency-svc-ddzpv Mar 18 11:54:09.151: INFO: Got endpoints: latency-svc-ddzpv [749.611074ms] Mar 18 11:54:09.172: INFO: Created: latency-svc-2skvv Mar 18 11:54:09.198: INFO: Got endpoints: latency-svc-2skvv [760.707941ms] Mar 18 11:54:09.224: INFO: Created: latency-svc-7h9kb Mar 18 11:54:09.241: INFO: Got endpoints: latency-svc-7h9kb [711.99931ms] Mar 18 11:54:09.307: INFO: Created: latency-svc-bl927 Mar 18 11:54:09.318: INFO: Got endpoints: latency-svc-bl927 [754.033144ms] Mar 18 11:54:09.340: INFO: Created: latency-svc-74r2d Mar 18 11:54:09.355: INFO: Got endpoints: latency-svc-74r2d [760.524201ms] Mar 18 11:54:09.375: INFO: Created: latency-svc-5lrrd Mar 18 11:54:09.413: INFO: Got endpoints: latency-svc-5lrrd [707.341312ms] Mar 18 11:54:09.423: INFO: Created: latency-svc-2trv2 Mar 18 11:54:09.439: INFO: Got endpoints: latency-svc-2trv2 [724.856388ms] Mar 18 11:54:09.469: INFO: Created: latency-svc-clltt Mar 18 11:54:09.482: INFO: Got endpoints: latency-svc-clltt [736.437438ms] Mar 18 11:54:09.499: INFO: Created: latency-svc-8qw8v Mar 18 11:54:09.512: INFO: Got endpoints: latency-svc-8qw8v [736.614275ms] Mar 18 11:54:09.562: INFO: Created: latency-svc-jd5dh Mar 18 11:54:09.564: INFO: Got endpoints: latency-svc-jd5dh [685.375642ms] Mar 18 11:54:09.622: INFO: Created: latency-svc-d68s6 Mar 18 11:54:09.632: INFO: Got endpoints: latency-svc-d68s6 [736.733286ms] Mar 18 11:54:09.651: INFO: Created: latency-svc-rkqqk Mar 18 11:54:09.693: INFO: Got endpoints: latency-svc-rkqqk [767.171695ms] Mar 18 11:54:09.709: INFO: Created: latency-svc-scf74 Mar 18 11:54:09.723: INFO: Got endpoints: latency-svc-scf74 [718.546421ms] Mar 18 11:54:09.745: INFO: Created: latency-svc-2xvht Mar 18 11:54:09.753: INFO: Got endpoints: latency-svc-2xvht [691.97293ms] Mar 18 11:54:09.775: INFO: Created: latency-svc-jrz9t Mar 18 11:54:09.783: INFO: Got endpoints: latency-svc-jrz9t [682.103953ms] Mar 18 11:54:09.855: INFO: Created: latency-svc-k26zq Mar 18 11:54:09.858: INFO: Got endpoints: latency-svc-k26zq [707.049241ms] Mar 18 11:54:09.879: INFO: Created: latency-svc-czkfn Mar 18 11:54:09.892: INFO: Got endpoints: latency-svc-czkfn [693.697363ms] Mar 18 11:54:09.909: INFO: Created: latency-svc-ds8g5 Mar 18 11:54:09.936: INFO: Got endpoints: latency-svc-ds8g5 [695.88685ms] Mar 18 11:54:09.999: INFO: Created: 
latency-svc-s7l7t Mar 18 11:54:10.001: INFO: Got endpoints: latency-svc-s7l7t [683.108467ms] Mar 18 11:54:10.027: INFO: Created: latency-svc-7nb67 Mar 18 11:54:10.043: INFO: Got endpoints: latency-svc-7nb67 [688.409824ms] Mar 18 11:54:10.077: INFO: Created: latency-svc-9gggb Mar 18 11:54:10.092: INFO: Got endpoints: latency-svc-9gggb [679.337883ms] Mar 18 11:54:10.179: INFO: Created: latency-svc-7kv86 Mar 18 11:54:10.181: INFO: Got endpoints: latency-svc-7kv86 [741.77934ms] Mar 18 11:54:10.207: INFO: Created: latency-svc-phf58 Mar 18 11:54:10.218: INFO: Got endpoints: latency-svc-phf58 [736.325071ms] Mar 18 11:54:10.243: INFO: Created: latency-svc-w8hd6 Mar 18 11:54:10.272: INFO: Got endpoints: latency-svc-w8hd6 [760.414485ms] Mar 18 11:54:10.322: INFO: Created: latency-svc-qc2jd Mar 18 11:54:10.326: INFO: Got endpoints: latency-svc-qc2jd [761.325854ms] Mar 18 11:54:10.371: INFO: Created: latency-svc-nvj7h Mar 18 11:54:10.387: INFO: Got endpoints: latency-svc-nvj7h [754.2137ms] Mar 18 11:54:10.419: INFO: Created: latency-svc-vqzpv Mar 18 11:54:10.490: INFO: Got endpoints: latency-svc-vqzpv [796.517754ms] Mar 18 11:54:10.493: INFO: Created: latency-svc-dsk8f Mar 18 11:54:10.501: INFO: Got endpoints: latency-svc-dsk8f [778.36826ms] Mar 18 11:54:10.537: INFO: Created: latency-svc-mqvn6 Mar 18 11:54:10.569: INFO: Created: latency-svc-4b7fq Mar 18 11:54:10.628: INFO: Got endpoints: latency-svc-mqvn6 [874.611327ms] Mar 18 11:54:10.628: INFO: Got endpoints: latency-svc-4b7fq [844.6167ms] Mar 18 11:54:10.665: INFO: Created: latency-svc-jqcdx Mar 18 11:54:10.676: INFO: Got endpoints: latency-svc-jqcdx [818.36094ms] Mar 18 11:54:10.705: INFO: Created: latency-svc-84f4h Mar 18 11:54:10.718: INFO: Got endpoints: latency-svc-84f4h [826.041587ms] Mar 18 11:54:10.777: INFO: Created: latency-svc-t95jv Mar 18 11:54:10.780: INFO: Got endpoints: latency-svc-t95jv [843.607952ms] Mar 18 11:54:10.813: INFO: Created: latency-svc-2wd64 Mar 18 11:54:10.827: INFO: Got endpoints: latency-svc-2wd64 [825.60579ms] Mar 18 11:54:10.845: INFO: Created: latency-svc-4fsbg Mar 18 11:54:10.857: INFO: Got endpoints: latency-svc-4fsbg [814.173384ms] Mar 18 11:54:10.875: INFO: Created: latency-svc-7k4bc Mar 18 11:54:10.939: INFO: Got endpoints: latency-svc-7k4bc [846.478707ms] Mar 18 11:54:10.941: INFO: Created: latency-svc-7z2zz Mar 18 11:54:10.948: INFO: Got endpoints: latency-svc-7z2zz [767.031753ms] Mar 18 11:54:10.969: INFO: Created: latency-svc-gb8sw Mar 18 11:54:10.984: INFO: Got endpoints: latency-svc-gb8sw [766.257416ms] Mar 18 11:54:11.011: INFO: Created: latency-svc-bk79w Mar 18 11:54:11.027: INFO: Got endpoints: latency-svc-bk79w [754.430735ms] Mar 18 11:54:11.101: INFO: Created: latency-svc-4kc79 Mar 18 11:54:11.104: INFO: Got endpoints: latency-svc-4kc79 [778.513828ms] Mar 18 11:54:11.128: INFO: Created: latency-svc-c56jw Mar 18 11:54:11.162: INFO: Got endpoints: latency-svc-c56jw [775.61194ms] Mar 18 11:54:11.163: INFO: Created: latency-svc-rjf72 Mar 18 11:54:11.176: INFO: Got endpoints: latency-svc-rjf72 [686.297631ms] Mar 18 11:54:11.199: INFO: Created: latency-svc-bcgn5 Mar 18 11:54:11.256: INFO: Got endpoints: latency-svc-bcgn5 [754.713393ms] Mar 18 11:54:11.256: INFO: Latencies: [42.266637ms 85.911355ms 132.748657ms 139.60518ms 215.001788ms 275.497436ms 329.776997ms 396.553442ms 426.269125ms 486.090292ms 538.159026ms 576.799144ms 612.753296ms 631.865704ms 657.497259ms 659.524931ms 668.879177ms 671.692005ms 677.491144ms 679.337883ms 682.103953ms 682.581087ms 683.004331ms 683.108467ms 685.375642ms 686.297631ms 
688.409824ms 691.97293ms 693.697363ms 695.88685ms 704.246902ms 707.049241ms 707.341312ms 709.991912ms 711.99931ms 713.480642ms 713.621392ms 714.289732ms 714.60269ms 718.546421ms 719.540598ms 719.973508ms 723.02089ms 724.856388ms 728.295583ms 730.040547ms 731.096705ms 731.911049ms 736.325071ms 736.437438ms 736.614275ms 736.733286ms 736.79885ms 738.763385ms 738.873469ms 739.267211ms 741.293356ms 741.77934ms 743.626879ms 743.93561ms 745.855684ms 747.896015ms 748.317593ms 749.611074ms 750.883858ms 751.641842ms 752.238947ms 752.431801ms 753.276223ms 754.033144ms 754.2137ms 754.430735ms 754.713393ms 755.411607ms 756.356187ms 757.447748ms 759.829805ms 760.197596ms 760.414485ms 760.524201ms 760.707941ms 761.325854ms 762.54067ms 762.80963ms 765.296073ms 766.257416ms 767.031753ms 767.171695ms 769.574211ms 772.328486ms 772.802826ms 773.306219ms 775.567608ms 775.61194ms 776.280097ms 777.446477ms 778.36826ms 778.4844ms 778.513828ms 779.741691ms 783.256972ms 784.315899ms 785.656641ms 786.23569ms 788.518849ms 788.72221ms 789.25952ms 791.986695ms 794.954527ms 795.329096ms 795.675635ms 796.517754ms 797.809044ms 802.267803ms 802.631718ms 803.789427ms 806.582238ms 808.082921ms 810.026673ms 810.247335ms 812.503308ms 813.105019ms 814.173384ms 814.674495ms 818.36094ms 819.731189ms 825.60579ms 826.041587ms 826.470464ms 829.299307ms 831.498769ms 831.617839ms 832.345383ms 835.137552ms 835.909174ms 837.914296ms 837.967165ms 838.554989ms 839.546033ms 840.626378ms 841.852244ms 842.856432ms 843.125689ms 843.12668ms 843.607952ms 844.6167ms 846.478707ms 847.362561ms 847.767575ms 852.584617ms 854.970778ms 857.920794ms 858.912765ms 860.498323ms 860.634249ms 862.32443ms 869.223262ms 870.28826ms 872.458275ms 874.417067ms 874.611327ms 875.34492ms 876.355407ms 881.529007ms 885.153041ms 885.640272ms 889.736336ms 889.880645ms 891.499091ms 897.700336ms 898.294407ms 901.135339ms 901.6525ms 902.80183ms 905.529983ms 906.803625ms 906.811187ms 914.755207ms 921.922744ms 922.964365ms 923.18507ms 926.212298ms 927.38569ms 928.087475ms 930.681489ms 999.389791ms 999.508724ms 1.008541256s 1.010934046s 1.01149928s 1.011815969s 1.014549407s 1.016755517s 1.039354116s 1.047289437s 1.051007944s 1.071889945s 1.103411711s 1.114421117s 1.122665595s] Mar 18 11:54:11.257: INFO: 50 %ile: 783.256972ms Mar 18 11:54:11.257: INFO: 90 %ile: 923.18507ms Mar 18 11:54:11.257: INFO: 99 %ile: 1.114421117s Mar 18 11:54:11.257: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:54:11.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-mlrlp" for this suite. 
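Each latency sample above ("Got endpoints: latency-svc-... [NNNms]") is essentially the time from creating a Service until its Endpoints object carries a ready address. Below is a minimal sketch of that measurement, assuming the v1.13-era context-free client-go signatures; the namespace, selector and port are illustrative placeholders, not values from this run.

package main

import (
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ns := "default" // illustrative

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "latency-svc-"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "svc-latency-rc"}, // illustrative selector for the RC's pods
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}

	start := time.Now()
	created, err := cs.CoreV1().Services(ns).Create(svc)
	if err != nil {
		log.Fatal(err)
	}
	// "Got endpoints" corresponds to the Endpoints object gaining at least one
	// ready address for the newly created Service.
	err = wait.PollImmediate(50*time.Millisecond, 30*time.Second, func() (bool, error) {
		ep, getErr := cs.CoreV1().Endpoints(ns).Get(created.Name, metav1.GetOptions{})
		if getErr != nil {
			return false, nil // Endpoints object not there yet; keep polling
		}
		for _, ss := range ep.Subsets {
			if len(ss.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Got endpoints: %s [%v]\n", created.Name, time.Since(start))
}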
Mar 18 11:54:45.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:54:45.284: INFO: namespace: e2e-tests-svc-latency-mlrlp, resource: bindings, ignored listing per whitelist Mar 18 11:54:45.359: INFO: namespace e2e-tests-svc-latency-mlrlp deletion completed in 34.098984259s • [SLOW TEST:48.209 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:54:45.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:54:45.425: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:54:49.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-l5z79" for this suite. 
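The pods test above drives the pod "exec" subresource over a raw websocket. A more common client-go route to the same subresource is the SPDY executor sketched below; this is not the websocket client the test itself uses, only an illustration of remote command execution against a pod, with illustrative namespace and pod names and the v1.13-era signatures assumed.

package main

import (
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ns, podName := "default", "pod-exec-demo" // illustrative names

	// Build a request against the pod's "exec" subresource, then stream the
	// remote command's output back to this process.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(podName).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution works"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	executor, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		log.Fatal(err)
	}
	if err := executor.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		log.Fatal(err)
	}
}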
Mar 18 11:55:35.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:55:35.620: INFO: namespace: e2e-tests-pods-l5z79, resource: bindings, ignored listing per whitelist Mar 18 11:55:35.664: INFO: namespace e2e-tests-pods-l5z79 deletion completed in 46.084482134s • [SLOW TEST:50.304 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:55:35.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0318 11:55:46.483515 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 11:55:46.483: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:55:46.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-6dkzt" for this suite. 
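What the garbage-collector test above sets up: half of the pods receive a second owner reference pointing at simpletest-rc-to-stay, then simpletest-rc-to-be-deleted is removed while waiting for dependents; a pod that still has a live owner must not be collected. A rough client-go sketch of that setup, assuming the v1.13-era signatures; the namespace and label selector are illustrative.

package main

import (
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ns := "default" // illustrative

	rcToStay, err := cs.CoreV1().ReplicationControllers(ns).Get("simpletest-rc-to-stay", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Give each selected pod a second owner, so deleting the other RC below
	// (with foreground propagation) cannot take the pod with it.
	pods, err := cs.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: "name=simpletest"}) // illustrative selector
	if err != nil {
		log.Fatal(err)
	}
	for i := range pods.Items {
		p := &pods.Items[i]
		p.OwnerReferences = append(p.OwnerReferences, metav1.OwnerReference{
			APIVersion: "v1",
			Kind:       "ReplicationController",
			Name:       rcToStay.Name,
			UID:        rcToStay.UID,
		})
		if _, err := cs.CoreV1().Pods(ns).Update(p); err != nil {
			log.Fatal(err)
		}
	}

	// Delete the first RC and wait for its dependents; dual-owned pods remain.
	policy := metav1.DeletePropagationForeground
	opts := &metav1.DeleteOptions{PropagationPolicy: &policy}
	if err := cs.CoreV1().ReplicationControllers(ns).Delete("simpletest-rc-to-be-deleted", opts); err != nil {
		log.Fatal(err)
	}
}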
Mar 18 11:55:54.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:55:54.567: INFO: namespace: e2e-tests-gc-6dkzt, resource: bindings, ignored listing per whitelist Mar 18 11:55:54.580: INFO: namespace e2e-tests-gc-6dkzt deletion completed in 8.093045203s • [SLOW TEST:18.916 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:55:54.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0318 11:56:34.938860 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 11:56:34.938: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:56:34.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-djpdd" for this suite. 
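The "orphan" behaviour verified above comes down to the delete options: with PropagationPolicy set to Orphan, the garbage collector removes the owner reference from the RC's pods instead of deleting them, so the pods keep running through the 30-second check. A minimal sketch, again assuming the v1.13-era signatures and illustrative names.

package main

import (
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ns, rcName := "default", "simpletest-rc" // illustrative

	// Delete only the controller; its pods are orphaned rather than collected.
	policy := metav1.DeletePropagationOrphan
	opts := &metav1.DeleteOptions{PropagationPolicy: &policy}
	if err := cs.CoreV1().ReplicationControllers(ns).Delete(rcName, opts); err != nil {
		log.Fatal(err)
	}
}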
Mar 18 11:56:44.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:56:45.005: INFO: namespace: e2e-tests-gc-djpdd, resource: bindings, ignored listing per whitelist Mar 18 11:56:45.050: INFO: namespace e2e-tests-gc-djpdd deletion completed in 10.107935626s • [SLOW TEST:50.470 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:56:45.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 18 11:56:49.717: INFO: Successfully updated pod "annotationupdate8a2a9542-690f-11ea-9856-0242ac11000f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:56:51.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4rgh8" for this suite. 
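The annotationupdate pod above mounts its own metadata through a projected downwardAPI volume; when the test updates the pod's annotations, the kubelet rewrites the mounted file and the container sees the change. A sketch of such a pod spec follows, under the v1.13-era signature assumption; the image, command, mount path and annotation values are illustrative stand-ins for the suite's own test image.

package main

import (
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ns := "default" // illustrative

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		log.Fatal(err)
	}
	// Updating pod.ObjectMeta.Annotations afterwards is what makes the mounted
	// /etc/podinfo/annotations file change inside the running container.
}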
Mar 18 11:57:13.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:57:13.799: INFO: namespace: e2e-tests-projected-4rgh8, resource: bindings, ignored listing per whitelist Mar 18 11:57:13.830: INFO: namespace e2e-tests-projected-4rgh8 deletion completed in 22.091661951s • [SLOW TEST:28.779 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:57:13.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Mar 18 11:57:18.021: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:57:42.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-bqg44" for this suite. Mar 18 11:57:48.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:57:48.204: INFO: namespace: e2e-tests-namespaces-bqg44, resource: bindings, ignored listing per whitelist Mar 18 11:57:48.221: INFO: namespace e2e-tests-namespaces-bqg44 deletion completed in 6.092229936s STEP: Destroying namespace "e2e-tests-nsdeletetest-clqfm" for this suite. Mar 18 11:57:48.224: INFO: Namespace e2e-tests-nsdeletetest-clqfm was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-f6m64" for this suite. 
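The core of the namespaces test above is "delete the namespace, wait for it to disappear, then confirm no pods survive": namespace deletion is asynchronous, with the namespace sitting in Terminating while everything inside it is removed. A minimal sketch of the delete-and-wait part, with the usual v1.13-era signature assumption and an illustrative namespace name.

package main

import (
	"fmt"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ns := "nsdeletetest-demo" // illustrative

	if err := cs.CoreV1().Namespaces().Delete(ns, nil); err != nil {
		log.Fatal(err)
	}
	// Poll until the Namespace object itself is gone; by then all pods in it
	// have been removed as part of namespace termination.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().Namespaces().Get(ns, metav1.GetOptions{})
		if getErr == nil {
			return false, nil // still terminating
		}
		if apierrors.IsNotFound(getErr) {
			return true, nil
		}
		return false, getErr
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("namespace", ns, "fully deleted")
}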
Mar 18 11:57:54.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:57:54.312: INFO: namespace: e2e-tests-nsdeletetest-f6m64, resource: bindings, ignored listing per whitelist Mar 18 11:57:54.326: INFO: namespace e2e-tests-nsdeletetest-f6m64 deletion completed in 6.102193109s • [SLOW TEST:40.496 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:57:54.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 11:57:58.470: INFO: Waiting up to 5m0s for pod "client-envvars-b5d9cd72-690f-11ea-9856-0242ac11000f" in namespace "e2e-tests-pods-nq5kv" to be "success or failure" Mar 18 11:57:58.505: INFO: Pod "client-envvars-b5d9cd72-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.04843ms Mar 18 11:58:00.510: INFO: Pod "client-envvars-b5d9cd72-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039463942s Mar 18 11:58:02.514: INFO: Pod "client-envvars-b5d9cd72-690f-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043876407s STEP: Saw pod success Mar 18 11:58:02.514: INFO: Pod "client-envvars-b5d9cd72-690f-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:58:02.517: INFO: Trying to get logs from node hunter-worker pod client-envvars-b5d9cd72-690f-11ea-9856-0242ac11000f container env3cont: STEP: delete the pod Mar 18 11:58:02.536: INFO: Waiting for pod client-envvars-b5d9cd72-690f-11ea-9856-0242ac11000f to disappear Mar 18 11:58:02.541: INFO: Pod client-envvars-b5d9cd72-690f-11ea-9856-0242ac11000f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:58:02.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-nq5kv" for this suite. 
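The env3cont container above only has to print its environment: for every Service that exists when a pod starts, the kubelet injects <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT variables (plus the docker-link style <NAME>_PORT_* entries). A small self-contained sketch of what such a container could run, with no cluster API involved; the filtering is an illustration, not the suite's own check.

package main

import (
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	// Collect only the Service-derived variables,
	// e.g. KUBERNETES_SERVICE_HOST=10.96.0.1, KUBERNETES_SERVICE_PORT=443.
	var svcVars []string
	for _, kv := range os.Environ() {
		if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
			svcVars = append(svcVars, kv)
		}
	}
	sort.Strings(svcVars)
	for _, kv := range svcVars {
		fmt.Println(kv)
	}
}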
Mar 18 11:58:46.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:58:46.606: INFO: namespace: e2e-tests-pods-nq5kv, resource: bindings, ignored listing per whitelist Mar 18 11:58:46.636: INFO: namespace e2e-tests-pods-nq5kv deletion completed in 44.0918837s • [SLOW TEST:52.309 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:58:46.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d2a07131-690f-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 11:58:46.763: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d2a29047-690f-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-gnmf4" to be "success or failure" Mar 18 11:58:46.775: INFO: Pod "pod-projected-configmaps-d2a29047-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.101035ms Mar 18 11:58:48.779: INFO: Pod "pod-projected-configmaps-d2a29047-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01576241s Mar 18 11:58:50.793: INFO: Pod "pod-projected-configmaps-d2a29047-690f-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029969907s STEP: Saw pod success Mar 18 11:58:50.793: INFO: Pod "pod-projected-configmaps-d2a29047-690f-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:58:50.796: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-d2a29047-690f-11ea-9856-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 18 11:58:50.819: INFO: Waiting for pod pod-projected-configmaps-d2a29047-690f-11ea-9856-0242ac11000f to disappear Mar 18 11:58:50.823: INFO: Pod pod-projected-configmaps-d2a29047-690f-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:58:50.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gnmf4" for this suite. 
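For this projected-configMap test, the pod simply mounts a ConfigMap through a projected volume and the test container reads the file back. A sketch of the ConfigMap plus pod spec, with the v1.13-era signature assumption; names, image, command and data values are illustrative.

package main

import (
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ns := "default" // illustrative

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
		log.Fatal(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		log.Fatal(err)
	}
}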
Mar 18 11:58:56.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:58:56.914: INFO: namespace: e2e-tests-projected-gnmf4, resource: bindings, ignored listing per whitelist Mar 18 11:58:56.918: INFO: namespace e2e-tests-projected-gnmf4 deletion completed in 6.092508248s • [SLOW TEST:10.283 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:58:56.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-d8bd3770-690f-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 11:58:57.090: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d8c87127-690f-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-z8g5l" to be "success or failure" Mar 18 11:58:57.118: INFO: Pod "pod-projected-configmaps-d8c87127-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.890734ms Mar 18 11:58:59.122: INFO: Pod "pod-projected-configmaps-d8c87127-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031314743s Mar 18 11:59:01.126: INFO: Pod "pod-projected-configmaps-d8c87127-690f-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035898336s STEP: Saw pod success Mar 18 11:59:01.126: INFO: Pod "pod-projected-configmaps-d8c87127-690f-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 11:59:01.130: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-d8c87127-690f-11ea-9856-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 18 11:59:01.168: INFO: Waiting for pod pod-projected-configmaps-d8c87127-690f-11ea-9856-0242ac11000f to disappear Mar 18 11:59:01.172: INFO: Pod pod-projected-configmaps-d8c87127-690f-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:59:01.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z8g5l" for this suite. 
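The "mappings and Item mode set" variant above differs from the previous test only on the volume side: the projected ConfigMap source carries Items (KeyToPath entries) that remap a key to a new relative path and set an explicit file Mode on it. From inside the container the check then reduces to stat-ing and reading the remapped file; a small stand-alone sketch of that in-container check follows, where the mount path and the expected mode are assumptions for illustration.

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
)

func main() {
	// Illustrative path: the configMap key was remapped to path/to/data-2 via
	// a KeyToPath item with an explicit mode (e.g. 0400), so both the content
	// and the permission bits can be verified from the mount.
	const path = "/etc/projected-configmap-volume/path/to/data-2"
	info, err := os.Stat(path)
	if err != nil {
		log.Fatal(err)
	}
	data, err := ioutil.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("mode=%#o content=%q\n", info.Mode().Perm(), string(data))
}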
Mar 18 11:59:07.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:59:07.241: INFO: namespace: e2e-tests-projected-z8g5l, resource: bindings, ignored listing per whitelist Mar 18 11:59:07.261: INFO: namespace e2e-tests-projected-z8g5l deletion completed in 6.086756576s • [SLOW TEST:10.343 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:59:07.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xh7x4 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 11:59:07.344: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 11:59:31.485: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.114 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xh7x4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 11:59:31.485: INFO: >>> kubeConfig: /root/.kube/config I0318 11:59:31.522204 6 log.go:172] (0xc0016682c0) (0xc001de5720) Create stream I0318 11:59:31.522232 6 log.go:172] (0xc0016682c0) (0xc001de5720) Stream added, broadcasting: 1 I0318 11:59:31.524498 6 log.go:172] (0xc0016682c0) Reply frame received for 1 I0318 11:59:31.524546 6 log.go:172] (0xc0016682c0) (0xc00110e5a0) Create stream I0318 11:59:31.524562 6 log.go:172] (0xc0016682c0) (0xc00110e5a0) Stream added, broadcasting: 3 I0318 11:59:31.525766 6 log.go:172] (0xc0016682c0) Reply frame received for 3 I0318 11:59:31.525828 6 log.go:172] (0xc0016682c0) (0xc001056820) Create stream I0318 11:59:31.525849 6 log.go:172] (0xc0016682c0) (0xc001056820) Stream added, broadcasting: 5 I0318 11:59:31.526886 6 log.go:172] (0xc0016682c0) Reply frame received for 5 I0318 11:59:32.615624 6 log.go:172] (0xc0016682c0) Data frame received for 3 I0318 11:59:32.615676 6 log.go:172] (0xc00110e5a0) (3) Data frame handling I0318 11:59:32.615714 6 log.go:172] (0xc00110e5a0) (3) Data frame sent I0318 11:59:32.615740 6 log.go:172] (0xc0016682c0) Data frame received for 3 I0318 11:59:32.615757 6 log.go:172] (0xc00110e5a0) (3) Data frame handling I0318 11:59:32.615985 6 log.go:172] (0xc0016682c0) Data frame received for 5 I0318 11:59:32.616017 6 log.go:172] (0xc001056820) (5) Data frame handling I0318 11:59:32.618227 6 log.go:172] (0xc0016682c0) Data frame received for 1 
I0318 11:59:32.618270 6 log.go:172] (0xc001de5720) (1) Data frame handling I0318 11:59:32.618305 6 log.go:172] (0xc001de5720) (1) Data frame sent I0318 11:59:32.618352 6 log.go:172] (0xc0016682c0) (0xc001de5720) Stream removed, broadcasting: 1 I0318 11:59:32.618395 6 log.go:172] (0xc0016682c0) Go away received I0318 11:59:32.618490 6 log.go:172] (0xc0016682c0) (0xc001de5720) Stream removed, broadcasting: 1 I0318 11:59:32.618513 6 log.go:172] (0xc0016682c0) (0xc00110e5a0) Stream removed, broadcasting: 3 I0318 11:59:32.618540 6 log.go:172] (0xc0016682c0) (0xc001056820) Stream removed, broadcasting: 5 Mar 18 11:59:32.618: INFO: Found all expected endpoints: [netserver-0] Mar 18 11:59:32.622: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.26 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xh7x4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 11:59:32.622: INFO: >>> kubeConfig: /root/.kube/config I0318 11:59:32.648058 6 log.go:172] (0xc001668790) (0xc001de59a0) Create stream I0318 11:59:32.648090 6 log.go:172] (0xc001668790) (0xc001de59a0) Stream added, broadcasting: 1 I0318 11:59:32.650421 6 log.go:172] (0xc001668790) Reply frame received for 1 I0318 11:59:32.650455 6 log.go:172] (0xc001668790) (0xc001de5a40) Create stream I0318 11:59:32.650468 6 log.go:172] (0xc001668790) (0xc001de5a40) Stream added, broadcasting: 3 I0318 11:59:32.651242 6 log.go:172] (0xc001668790) Reply frame received for 3 I0318 11:59:32.651271 6 log.go:172] (0xc001668790) (0xc001de5ae0) Create stream I0318 11:59:32.651281 6 log.go:172] (0xc001668790) (0xc001de5ae0) Stream added, broadcasting: 5 I0318 11:59:32.652197 6 log.go:172] (0xc001668790) Reply frame received for 5 I0318 11:59:33.731715 6 log.go:172] (0xc001668790) Data frame received for 5 I0318 11:59:33.731769 6 log.go:172] (0xc001de5ae0) (5) Data frame handling I0318 11:59:33.731808 6 log.go:172] (0xc001668790) Data frame received for 3 I0318 11:59:33.731833 6 log.go:172] (0xc001de5a40) (3) Data frame handling I0318 11:59:33.731862 6 log.go:172] (0xc001de5a40) (3) Data frame sent I0318 11:59:33.731875 6 log.go:172] (0xc001668790) Data frame received for 3 I0318 11:59:33.731896 6 log.go:172] (0xc001de5a40) (3) Data frame handling I0318 11:59:33.733739 6 log.go:172] (0xc001668790) Data frame received for 1 I0318 11:59:33.733772 6 log.go:172] (0xc001de59a0) (1) Data frame handling I0318 11:59:33.733794 6 log.go:172] (0xc001de59a0) (1) Data frame sent I0318 11:59:33.733815 6 log.go:172] (0xc001668790) (0xc001de59a0) Stream removed, broadcasting: 1 I0318 11:59:33.733830 6 log.go:172] (0xc001668790) Go away received I0318 11:59:33.734002 6 log.go:172] (0xc001668790) (0xc001de59a0) Stream removed, broadcasting: 1 I0318 11:59:33.734035 6 log.go:172] (0xc001668790) (0xc001de5a40) Stream removed, broadcasting: 3 I0318 11:59:33.734060 6 log.go:172] (0xc001668790) (0xc001de5ae0) Stream removed, broadcasting: 5 Mar 18 11:59:33.734: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 11:59:33.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xh7x4" for this suite. 
Mar 18 11:59:57.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 11:59:57.816: INFO: namespace: e2e-tests-pod-network-test-xh7x4, resource: bindings, ignored listing per whitelist Mar 18 11:59:57.834: INFO: namespace e2e-tests-pod-network-test-xh7x4 deletion completed in 24.095603632s • [SLOW TEST:50.572 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 11:59:57.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 18 11:59:57.946: INFO: Waiting up to 5m0s for pod "pod-fd1114f0-690f-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-d89ph" to be "success or failure" Mar 18 11:59:57.960: INFO: Pod "pod-fd1114f0-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.661785ms Mar 18 11:59:59.964: INFO: Pod "pod-fd1114f0-690f-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01863779s Mar 18 12:00:01.969: INFO: Pod "pod-fd1114f0-690f-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023353461s STEP: Saw pod success Mar 18 12:00:01.969: INFO: Pod "pod-fd1114f0-690f-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:00:01.972: INFO: Trying to get logs from node hunter-worker2 pod pod-fd1114f0-690f-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 12:00:02.017: INFO: Waiting for pod pod-fd1114f0-690f-11ea-9856-0242ac11000f to disappear Mar 18 12:00:02.028: INFO: Pod pod-fd1114f0-690f-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:00:02.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-d89ph" for this suite. 
Mar 18 12:00:08.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:00:08.114: INFO: namespace: e2e-tests-emptydir-d89ph, resource: bindings, ignored listing per whitelist Mar 18 12:00:08.118: INFO: namespace e2e-tests-emptydir-d89ph deletion completed in 6.086667798s • [SLOW TEST:10.284 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:00:08.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 18 12:00:16.293: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 12:00:16.318: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 12:00:18.318: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 12:00:18.322: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 12:00:20.318: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 12:00:20.322: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 12:00:22.318: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 12:00:22.322: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:00:22.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8h878" for this suite. 
Mar 18 12:00:44.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:00:44.419: INFO: namespace: e2e-tests-container-lifecycle-hook-8h878, resource: bindings, ignored listing per whitelist Mar 18 12:00:44.442: INFO: namespace e2e-tests-container-lifecycle-hook-8h878 deletion completed in 22.115871997s • [SLOW TEST:36.324 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:00:44.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:00:44.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-xpmh7" for this suite. 
Mar 18 12:01:06.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:01:06.637: INFO: namespace: e2e-tests-pods-xpmh7, resource: bindings, ignored listing per whitelist Mar 18 12:01:06.695: INFO: namespace e2e-tests-pods-xpmh7 deletion completed in 22.143636879s • [SLOW TEST:22.253 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:01:06.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 18 12:01:06.811: INFO: Waiting up to 5m0s for pod "pod-261bb7b1-6910-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-6nhh8" to be "success or failure" Mar 18 12:01:06.843: INFO: Pod "pod-261bb7b1-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.984915ms Mar 18 12:01:08.846: INFO: Pod "pod-261bb7b1-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035468766s Mar 18 12:01:10.851: INFO: Pod "pod-261bb7b1-6910-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040097553s STEP: Saw pod success Mar 18 12:01:10.851: INFO: Pod "pod-261bb7b1-6910-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:01:10.854: INFO: Trying to get logs from node hunter-worker2 pod pod-261bb7b1-6910-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 12:01:10.892: INFO: Waiting for pod pod-261bb7b1-6910-11ea-9856-0242ac11000f to disappear Mar 18 12:01:10.904: INFO: Pod pod-261bb7b1-6910-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:01:10.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6nhh8" for this suite. 
Mar 18 12:01:16.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:01:16.976: INFO: namespace: e2e-tests-emptydir-6nhh8, resource: bindings, ignored listing per whitelist Mar 18 12:01:16.999: INFO: namespace e2e-tests-emptydir-6nhh8 deletion completed in 6.090125219s • [SLOW TEST:10.303 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:01:16.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:01:17.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-mk9tf" for this suite. 
Mar 18 12:01:23.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:01:23.272: INFO: namespace: e2e-tests-kubelet-test-mk9tf, resource: bindings, ignored listing per whitelist Mar 18 12:01:23.280: INFO: namespace e2e-tests-kubelet-test-mk9tf deletion completed in 6.093880471s • [SLOW TEST:6.281 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:01:23.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 18 12:01:31.438: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 12:01:31.444: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 12:01:33.444: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 12:01:33.449: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 12:01:35.444: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 12:01:35.449: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:01:35.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vtnx4" for this suite. 
Mar 18 12:01:57.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:01:57.569: INFO: namespace: e2e-tests-container-lifecycle-hook-vtnx4, resource: bindings, ignored listing per whitelist Mar 18 12:01:57.575: INFO: namespace e2e-tests-container-lifecycle-hook-vtnx4 deletion completed in 22.115783508s • [SLOW TEST:34.294 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:01:57.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Mar 18 12:01:57.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xrj87' Mar 18 12:02:00.082: INFO: stderr: "" Mar 18 12:02:00.082: INFO: stdout: "pod/pause created\n" Mar 18 12:02:00.082: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 18 12:02:00.082: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-xrj87" to be "running and ready" Mar 18 12:02:00.098: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 16.378907ms Mar 18 12:02:02.102: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020263625s Mar 18 12:02:04.125: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.043510388s Mar 18 12:02:04.125: INFO: Pod "pause" satisfied condition "running and ready" Mar 18 12:02:04.125: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Mar 18 12:02:04.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-xrj87' Mar 18 12:02:04.233: INFO: stderr: "" Mar 18 12:02:04.233: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 18 12:02:04.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-xrj87' Mar 18 12:02:04.346: INFO: stderr: "" Mar 18 12:02:04.346: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 18 12:02:04.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-xrj87' Mar 18 12:02:04.445: INFO: stderr: "" Mar 18 12:02:04.445: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 18 12:02:04.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-xrj87' Mar 18 12:02:04.539: INFO: stderr: "" Mar 18 12:02:04.539: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Mar 18 12:02:04.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xrj87' Mar 18 12:02:04.646: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 12:02:04.646: INFO: stdout: "pod \"pause\" force deleted\n" Mar 18 12:02:04.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-xrj87' Mar 18 12:02:04.761: INFO: stderr: "No resources found.\n" Mar 18 12:02:04.761: INFO: stdout: "" Mar 18 12:02:04.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-xrj87 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 12:02:04.858: INFO: stderr: "" Mar 18 12:02:04.858: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:02:04.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xrj87" for this suite. 
Mar 18 12:02:10.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:02:11.007: INFO: namespace: e2e-tests-kubectl-xrj87, resource: bindings, ignored listing per whitelist Mar 18 12:02:11.013: INFO: namespace e2e-tests-kubectl-xrj87 deletion completed in 6.152219668s • [SLOW TEST:13.438 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:02:11.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Mar 18 12:02:11.127: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix869181545/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:02:11.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-svpcb" for this suite. 
Mar 18 12:02:17.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:02:17.269: INFO: namespace: e2e-tests-kubectl-svpcb, resource: bindings, ignored listing per whitelist Mar 18 12:02:17.314: INFO: namespace e2e-tests-kubectl-svpcb deletion completed in 6.107455004s • [SLOW TEST:6.300 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:02:17.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-7wfx6/configmap-test-5038a5da-6910-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 12:02:17.465: INFO: Waiting up to 5m0s for pod "pod-configmaps-5039bf27-6910-11ea-9856-0242ac11000f" in namespace "e2e-tests-configmap-7wfx6" to be "success or failure" Mar 18 12:02:17.475: INFO: Pod "pod-configmaps-5039bf27-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.79197ms Mar 18 12:02:19.479: INFO: Pod "pod-configmaps-5039bf27-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013744555s Mar 18 12:02:21.482: INFO: Pod "pod-configmaps-5039bf27-6910-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017626831s STEP: Saw pod success Mar 18 12:02:21.482: INFO: Pod "pod-configmaps-5039bf27-6910-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:02:21.485: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-5039bf27-6910-11ea-9856-0242ac11000f container env-test: STEP: delete the pod Mar 18 12:02:21.527: INFO: Waiting for pod pod-configmaps-5039bf27-6910-11ea-9856-0242ac11000f to disappear Mar 18 12:02:21.559: INFO: Pod pod-configmaps-5039bf27-6910-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:02:21.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7wfx6" for this suite. 
Mar 18 12:02:27.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:02:27.642: INFO: namespace: e2e-tests-configmap-7wfx6, resource: bindings, ignored listing per whitelist Mar 18 12:02:27.705: INFO: namespace e2e-tests-configmap-7wfx6 deletion completed in 6.117893917s • [SLOW TEST:10.391 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:02:27.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 18 12:02:34.854: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:02:35.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-mccvt" for this suite. 
Mar 18 12:02:57.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:02:57.972: INFO: namespace: e2e-tests-replicaset-mccvt, resource: bindings, ignored listing per whitelist Mar 18 12:02:57.983: INFO: namespace e2e-tests-replicaset-mccvt deletion completed in 22.110019128s • [SLOW TEST:30.278 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:02:57.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-vltt9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vltt9 to expose endpoints map[] Mar 18 12:02:58.155: INFO: Get endpoints failed (38.672434ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 18 12:02:59.159: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vltt9 exposes endpoints map[] (1.042627595s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-vltt9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vltt9 to expose endpoints map[pod1:[80]] Mar 18 12:03:02.206: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vltt9 exposes endpoints map[pod1:[80]] (3.039612466s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-vltt9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vltt9 to expose endpoints map[pod2:[80] pod1:[80]] Mar 18 12:03:05.354: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vltt9 exposes endpoints map[pod1:[80] pod2:[80]] (3.144839164s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-vltt9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vltt9 to expose endpoints map[pod2:[80]] Mar 18 12:03:06.419: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vltt9 exposes endpoints map[pod2:[80]] (1.061354076s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-vltt9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vltt9 to expose endpoints map[] Mar 18 12:03:07.545: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vltt9 exposes endpoints map[] (1.12244735s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:03:07.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-vltt9" for this suite. Mar 18 12:03:13.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:03:13.690: INFO: namespace: e2e-tests-services-vltt9, resource: bindings, ignored listing per whitelist Mar 18 12:03:13.721: INFO: namespace e2e-tests-services-vltt9 deletion completed in 6.112945684s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:15.738 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:03:13.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-71d24635-6910-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 12:03:13.831: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71d2b33c-6910-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-b9vt9" to be "success or failure" Mar 18 12:03:13.835: INFO: Pod "pod-projected-secrets-71d2b33c-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.77593ms Mar 18 12:03:15.839: INFO: Pod "pod-projected-secrets-71d2b33c-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00754075s Mar 18 12:03:17.843: INFO: Pod "pod-projected-secrets-71d2b33c-6910-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011643852s STEP: Saw pod success Mar 18 12:03:17.843: INFO: Pod "pod-projected-secrets-71d2b33c-6910-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:03:17.846: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-71d2b33c-6910-11ea-9856-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 18 12:03:17.873: INFO: Waiting for pod pod-projected-secrets-71d2b33c-6910-11ea-9856-0242ac11000f to disappear Mar 18 12:03:17.883: INFO: Pod pod-projected-secrets-71d2b33c-6910-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:03:17.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b9vt9" for this suite. 
Mar 18 12:03:23.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:03:23.926: INFO: namespace: e2e-tests-projected-b9vt9, resource: bindings, ignored listing per whitelist Mar 18 12:03:23.979: INFO: namespace e2e-tests-projected-b9vt9 deletion completed in 6.09206136s • [SLOW TEST:10.257 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:03:23.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 18 12:03:31.267: INFO: 2 pods remaining Mar 18 12:03:31.267: INFO: 0 pods has nil DeletionTimestamp Mar 18 12:03:31.267: INFO: Mar 18 12:03:32.208: INFO: 0 pods remaining Mar 18 12:03:32.208: INFO: 0 pods has nil DeletionTimestamp Mar 18 12:03:32.208: INFO: STEP: Gathering metrics W0318 12:03:33.131142 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 12:03:33.131: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:03:33.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-nr92x" for this suite. 
Mar 18 12:03:39.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:03:39.180: INFO: namespace: e2e-tests-gc-nr92x, resource: bindings, ignored listing per whitelist Mar 18 12:03:39.240: INFO: namespace e2e-tests-gc-nr92x deletion completed in 6.105756246s • [SLOW TEST:15.260 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:03:39.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-810849f1-6910-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 12:03:39.368: INFO: Waiting up to 5m0s for pod "pod-configmaps-8108ef53-6910-11ea-9856-0242ac11000f" in namespace "e2e-tests-configmap-cw62s" to be "success or failure" Mar 18 12:03:39.376: INFO: Pod "pod-configmaps-8108ef53-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.439852ms Mar 18 12:03:41.379: INFO: Pod "pod-configmaps-8108ef53-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010548009s Mar 18 12:03:43.383: INFO: Pod "pod-configmaps-8108ef53-6910-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014215585s STEP: Saw pod success Mar 18 12:03:43.383: INFO: Pod "pod-configmaps-8108ef53-6910-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:03:43.385: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-8108ef53-6910-11ea-9856-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 18 12:03:43.438: INFO: Waiting for pod pod-configmaps-8108ef53-6910-11ea-9856-0242ac11000f to disappear Mar 18 12:03:43.441: INFO: Pod pod-configmaps-8108ef53-6910-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:03:43.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-cw62s" for this suite. 
Mar 18 12:03:49.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:03:49.487: INFO: namespace: e2e-tests-configmap-cw62s, resource: bindings, ignored listing per whitelist Mar 18 12:03:49.540: INFO: namespace e2e-tests-configmap-cw62s deletion completed in 6.096512324s • [SLOW TEST:10.301 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:03:49.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-872a8e51-6910-11ea-9856-0242ac11000f STEP: Creating a pod to test consume secrets Mar 18 12:03:49.653: INFO: Waiting up to 5m0s for pod "pod-secrets-872c8174-6910-11ea-9856-0242ac11000f" in namespace "e2e-tests-secrets-r5m9b" to be "success or failure" Mar 18 12:03:49.657: INFO: Pod "pod-secrets-872c8174-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276043ms Mar 18 12:03:51.661: INFO: Pod "pod-secrets-872c8174-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008133443s Mar 18 12:03:53.665: INFO: Pod "pod-secrets-872c8174-6910-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01239962s STEP: Saw pod success Mar 18 12:03:53.665: INFO: Pod "pod-secrets-872c8174-6910-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:03:53.668: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-872c8174-6910-11ea-9856-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 18 12:03:53.689: INFO: Waiting for pod pod-secrets-872c8174-6910-11ea-9856-0242ac11000f to disappear Mar 18 12:03:53.693: INFO: Pod pod-secrets-872c8174-6910-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:03:53.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-r5m9b" for this suite. 
Mar 18 12:03:59.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:03:59.767: INFO: namespace: e2e-tests-secrets-r5m9b, resource: bindings, ignored listing per whitelist Mar 18 12:03:59.804: INFO: namespace e2e-tests-secrets-r5m9b deletion completed in 6.107834468s • [SLOW TEST:10.263 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:03:59.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-8d47fce9-6910-11ea-9856-0242ac11000f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:04:03.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gdqql" for this suite. 
Mar 18 12:04:25.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:04:25.953: INFO: namespace: e2e-tests-configmap-gdqql, resource: bindings, ignored listing per whitelist Mar 18 12:04:26.017: INFO: namespace e2e-tests-configmap-gdqql deletion completed in 22.089459553s • [SLOW TEST:26.214 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:04:26.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9cf1e213-6910-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 12:04:26.181: INFO: Waiting up to 5m0s for pod "pod-configmaps-9cf26584-6910-11ea-9856-0242ac11000f" in namespace "e2e-tests-configmap-htjm7" to be "success or failure" Mar 18 12:04:26.184: INFO: Pod "pod-configmaps-9cf26584-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.549816ms Mar 18 12:04:28.189: INFO: Pod "pod-configmaps-9cf26584-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007676307s Mar 18 12:04:30.192: INFO: Pod "pod-configmaps-9cf26584-6910-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011651538s STEP: Saw pod success Mar 18 12:04:30.193: INFO: Pod "pod-configmaps-9cf26584-6910-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:04:30.195: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9cf26584-6910-11ea-9856-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 18 12:04:30.315: INFO: Waiting for pod pod-configmaps-9cf26584-6910-11ea-9856-0242ac11000f to disappear Mar 18 12:04:30.328: INFO: Pod pod-configmaps-9cf26584-6910-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:04:30.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-htjm7" for this suite. 
Mar 18 12:04:36.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:04:36.405: INFO: namespace: e2e-tests-configmap-htjm7, resource: bindings, ignored listing per whitelist Mar 18 12:04:36.426: INFO: namespace e2e-tests-configmap-htjm7 deletion completed in 6.094236302s • [SLOW TEST:10.409 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:04:36.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 12:04:36.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a325eec4-6910-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-9b8dz" to be "success or failure" Mar 18 12:04:36.593: INFO: Pod "downwardapi-volume-a325eec4-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.916932ms Mar 18 12:04:38.636: INFO: Pod "downwardapi-volume-a325eec4-6910-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052041021s Mar 18 12:04:40.640: INFO: Pod "downwardapi-volume-a325eec4-6910-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056428931s STEP: Saw pod success Mar 18 12:04:40.640: INFO: Pod "downwardapi-volume-a325eec4-6910-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:04:40.644: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a325eec4-6910-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 12:04:40.660: INFO: Waiting for pod downwardapi-volume-a325eec4-6910-11ea-9856-0242ac11000f to disappear Mar 18 12:04:40.664: INFO: Pod downwardapi-volume-a325eec4-6910-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:04:40.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9b8dz" for this suite. 
Mar 18 12:04:46.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:04:46.728: INFO: namespace: e2e-tests-projected-9b8dz, resource: bindings, ignored listing per whitelist Mar 18 12:04:46.777: INFO: namespace e2e-tests-projected-9b8dz deletion completed in 6.109287849s • [SLOW TEST:10.350 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:04:46.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0318 12:04:47.948404 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 12:04:47.948: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:04:47.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2d9w8" for this suite. 
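On the garbage-collector case above: what is being checked is ownership metadata rather than anything special in the Deployment manifest. A sketch (hypothetical names, placeholder UID) of the ownerReference the Deployment controller stamps on its ReplicaSet; when the Deployment is deleted without orphaning, the garbage collector follows these references and removes the ReplicaSet and, transitively, its Pods, which is why the log briefly reports "expected 0 rs, got 1 rs" while collection is still in flight.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-deployment-abc123              # hypothetical
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: example-deployment                   # hypothetical
    uid: 00000000-0000-0000-0000-000000000000  # placeholder
    controller: true
    blockOwnerDeletion: true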
Mar 18 12:04:54.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:04:54.047: INFO: namespace: e2e-tests-gc-2d9w8, resource: bindings, ignored listing per whitelist Mar 18 12:04:54.114: INFO: namespace e2e-tests-gc-2d9w8 deletion completed in 6.163021478s • [SLOW TEST:7.337 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:04:54.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:04:58.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-2vrrf" for this suite. 
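The read-only busybox case above comes down to a single securityContext field. A minimal sketch (not the generated pod), where the attempted write to the root filesystem is expected to fail:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly
    image: busybox:1.29
    command: ["sh", "-c", "echo hello > /file"]   # expected to fail: read-only root filesystem
    securityContext:
      readOnlyRootFilesystem: true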
Mar 18 12:05:36.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:05:36.322: INFO: namespace: e2e-tests-kubelet-test-2vrrf, resource: bindings, ignored listing per whitelist Mar 18 12:05:36.362: INFO: namespace e2e-tests-kubelet-test-2vrrf deletion completed in 38.104336583s • [SLOW TEST:42.248 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:05:36.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Mar 18 12:05:36.957: INFO: created pod pod-service-account-defaultsa Mar 18 12:05:36.957: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 18 12:05:36.964: INFO: created pod pod-service-account-mountsa Mar 18 12:05:36.964: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 18 12:05:36.970: INFO: created pod pod-service-account-nomountsa Mar 18 12:05:36.970: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 18 12:05:36.987: INFO: created pod pod-service-account-defaultsa-mountspec Mar 18 12:05:36.987: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 18 12:05:37.019: INFO: created pod pod-service-account-mountsa-mountspec Mar 18 12:05:37.019: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 18 12:05:37.031: INFO: created pod pod-service-account-nomountsa-mountspec Mar 18 12:05:37.031: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 18 12:05:37.077: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 18 12:05:37.077: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 18 12:05:37.104: INFO: created pod pod-service-account-mountsa-nomountspec Mar 18 12:05:37.104: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 18 12:05:37.134: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 18 12:05:37.134: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:05:37.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-svcaccounts-rjv6p" for this suite. Mar 18 12:06:03.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:06:03.307: INFO: namespace: e2e-tests-svcaccounts-rjv6p, resource: bindings, ignored listing per whitelist Mar 18 12:06:03.339: INFO: namespace e2e-tests-svcaccounts-rjv6p deletion completed in 26.10659865s • [SLOW TEST:26.976 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:06:03.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:06:07.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-vspv5" for this suite. 
Mar 18 12:06:53.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:06:53.485: INFO: namespace: e2e-tests-kubelet-test-vspv5, resource: bindings, ignored listing per whitelist Mar 18 12:06:53.550: INFO: namespace e2e-tests-kubelet-test-vspv5 deletion completed in 46.095580004s • [SLOW TEST:50.211 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:06:53.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 18 12:06:53.655: INFO: PodSpec: initContainers in spec.initContainers Mar 18 12:07:40.010: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f4da2fa9-6910-11ea-9856-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-init-container-bdc4g", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-bdc4g/pods/pod-init-f4da2fa9-6910-11ea-9856-0242ac11000f", UID:"f4dbdcce-6910-11ea-99e8-0242ac110002", ResourceVersion:"500468", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720130013, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"655447369"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tlgm2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000bc8080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tlgm2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tlgm2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tlgm2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001981c28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0019e49c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001981cb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001981d00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001981d08), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001981d0c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720130013, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720130013, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720130013, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720130013, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.51", StartTime:(*v1.Time)(0xc000a952e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000a95320), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0012ccd20)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://be7131d7ca020dff725911d2eeb6f42e7c2d26aef2dd66c2a34f2c8c08891320"}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000a953c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000a95300), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:07:40.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-bdc4g" for this suite. Mar 18 12:08:02.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:08:02.227: INFO: namespace: e2e-tests-init-container-bdc4g, resource: bindings, ignored listing per whitelist Mar 18 12:08:02.241: INFO: namespace e2e-tests-init-container-bdc4g deletion completed in 22.17553553s • [SLOW TEST:68.691 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:08:02.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-tlbwx [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 18 12:08:02.356: INFO: Found 0 stateful pods, waiting for 3 Mar 18 12:08:12.360: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 12:08:12.360: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 
12:08:12.360: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 18 12:08:12.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tlbwx ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 12:08:12.632: INFO: stderr: "I0318 12:08:12.510798 2626 log.go:172] (0xc0007942c0) (0xc0006f4640) Create stream\nI0318 12:08:12.510858 2626 log.go:172] (0xc0007942c0) (0xc0006f4640) Stream added, broadcasting: 1\nI0318 12:08:12.513856 2626 log.go:172] (0xc0007942c0) Reply frame received for 1\nI0318 12:08:12.513903 2626 log.go:172] (0xc0007942c0) (0xc0000f0c80) Create stream\nI0318 12:08:12.513919 2626 log.go:172] (0xc0007942c0) (0xc0000f0c80) Stream added, broadcasting: 3\nI0318 12:08:12.514993 2626 log.go:172] (0xc0007942c0) Reply frame received for 3\nI0318 12:08:12.515041 2626 log.go:172] (0xc0007942c0) (0xc0006bc000) Create stream\nI0318 12:08:12.515058 2626 log.go:172] (0xc0007942c0) (0xc0006bc000) Stream added, broadcasting: 5\nI0318 12:08:12.516102 2626 log.go:172] (0xc0007942c0) Reply frame received for 5\nI0318 12:08:12.626090 2626 log.go:172] (0xc0007942c0) Data frame received for 3\nI0318 12:08:12.626120 2626 log.go:172] (0xc0000f0c80) (3) Data frame handling\nI0318 12:08:12.626134 2626 log.go:172] (0xc0000f0c80) (3) Data frame sent\nI0318 12:08:12.626142 2626 log.go:172] (0xc0007942c0) Data frame received for 3\nI0318 12:08:12.626149 2626 log.go:172] (0xc0000f0c80) (3) Data frame handling\nI0318 12:08:12.626572 2626 log.go:172] (0xc0007942c0) Data frame received for 5\nI0318 12:08:12.626606 2626 log.go:172] (0xc0006bc000) (5) Data frame handling\nI0318 12:08:12.628134 2626 log.go:172] (0xc0007942c0) Data frame received for 1\nI0318 12:08:12.628146 2626 log.go:172] (0xc0006f4640) (1) Data frame handling\nI0318 12:08:12.628152 2626 log.go:172] (0xc0006f4640) (1) Data frame sent\nI0318 12:08:12.628328 2626 log.go:172] (0xc0007942c0) (0xc0006f4640) Stream removed, broadcasting: 1\nI0318 12:08:12.628385 2626 log.go:172] (0xc0007942c0) Go away received\nI0318 12:08:12.628547 2626 log.go:172] (0xc0007942c0) (0xc0006f4640) Stream removed, broadcasting: 1\nI0318 12:08:12.628564 2626 log.go:172] (0xc0007942c0) (0xc0000f0c80) Stream removed, broadcasting: 3\nI0318 12:08:12.628571 2626 log.go:172] (0xc0007942c0) (0xc0006bc000) Stream removed, broadcasting: 5\n" Mar 18 12:08:12.632: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 12:08:12.632: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 18 12:08:22.667: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 18 12:08:32.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tlbwx ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:08:32.959: INFO: stderr: "I0318 12:08:32.864659 2649 log.go:172] (0xc0001386e0) (0xc000623360) Create stream\nI0318 12:08:32.864718 2649 log.go:172] (0xc0001386e0) (0xc000623360) Stream added, broadcasting: 1\nI0318 12:08:32.866829 2649 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0318 12:08:32.866873 2649 log.go:172] (0xc0001386e0) (0xc000716000) Create stream\nI0318 12:08:32.866885 2649 
log.go:172] (0xc0001386e0) (0xc000716000) Stream added, broadcasting: 3\nI0318 12:08:32.867911 2649 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0318 12:08:32.867960 2649 log.go:172] (0xc0001386e0) (0xc000716140) Create stream\nI0318 12:08:32.867977 2649 log.go:172] (0xc0001386e0) (0xc000716140) Stream added, broadcasting: 5\nI0318 12:08:32.869060 2649 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0318 12:08:32.952599 2649 log.go:172] (0xc0001386e0) Data frame received for 3\nI0318 12:08:32.952645 2649 log.go:172] (0xc000716000) (3) Data frame handling\nI0318 12:08:32.952679 2649 log.go:172] (0xc000716000) (3) Data frame sent\nI0318 12:08:32.952698 2649 log.go:172] (0xc0001386e0) Data frame received for 3\nI0318 12:08:32.952711 2649 log.go:172] (0xc000716000) (3) Data frame handling\nI0318 12:08:32.952761 2649 log.go:172] (0xc0001386e0) Data frame received for 5\nI0318 12:08:32.952795 2649 log.go:172] (0xc000716140) (5) Data frame handling\nI0318 12:08:32.954993 2649 log.go:172] (0xc0001386e0) Data frame received for 1\nI0318 12:08:32.955010 2649 log.go:172] (0xc000623360) (1) Data frame handling\nI0318 12:08:32.955017 2649 log.go:172] (0xc000623360) (1) Data frame sent\nI0318 12:08:32.955024 2649 log.go:172] (0xc0001386e0) (0xc000623360) Stream removed, broadcasting: 1\nI0318 12:08:32.955154 2649 log.go:172] (0xc0001386e0) (0xc000623360) Stream removed, broadcasting: 1\nI0318 12:08:32.955164 2649 log.go:172] (0xc0001386e0) (0xc000716000) Stream removed, broadcasting: 3\nI0318 12:08:32.955306 2649 log.go:172] (0xc0001386e0) Go away received\nI0318 12:08:32.955343 2649 log.go:172] (0xc0001386e0) (0xc000716140) Stream removed, broadcasting: 5\n" Mar 18 12:08:32.959: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 12:08:32.959: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 12:08:42.981: INFO: Waiting for StatefulSet e2e-tests-statefulset-tlbwx/ss2 to complete update Mar 18 12:08:42.981: INFO: Waiting for Pod e2e-tests-statefulset-tlbwx/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 18 12:08:42.981: INFO: Waiting for Pod e2e-tests-statefulset-tlbwx/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 18 12:08:52.989: INFO: Waiting for StatefulSet e2e-tests-statefulset-tlbwx/ss2 to complete update Mar 18 12:08:52.989: INFO: Waiting for Pod e2e-tests-statefulset-tlbwx/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 18 12:09:03.035: INFO: Waiting for StatefulSet e2e-tests-statefulset-tlbwx/ss2 to complete update STEP: Rolling back to a previous revision Mar 18 12:09:12.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tlbwx ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 12:09:13.228: INFO: stderr: "I0318 12:09:13.111252 2671 log.go:172] (0xc000162840) (0xc0000175e0) Create stream\nI0318 12:09:13.111314 2671 log.go:172] (0xc000162840) (0xc0000175e0) Stream added, broadcasting: 1\nI0318 12:09:13.114212 2671 log.go:172] (0xc000162840) Reply frame received for 1\nI0318 12:09:13.114266 2671 log.go:172] (0xc000162840) (0xc00073e000) Create stream\nI0318 12:09:13.114286 2671 log.go:172] (0xc000162840) (0xc00073e000) Stream added, broadcasting: 3\nI0318 12:09:13.115272 2671 log.go:172] (0xc000162840) Reply frame received for 3\nI0318 12:09:13.115310 2671 log.go:172] (0xc000162840) 
(0xc0003be000) Create stream\nI0318 12:09:13.115326 2671 log.go:172] (0xc000162840) (0xc0003be000) Stream added, broadcasting: 5\nI0318 12:09:13.116399 2671 log.go:172] (0xc000162840) Reply frame received for 5\nI0318 12:09:13.221596 2671 log.go:172] (0xc000162840) Data frame received for 3\nI0318 12:09:13.221731 2671 log.go:172] (0xc00073e000) (3) Data frame handling\nI0318 12:09:13.221751 2671 log.go:172] (0xc00073e000) (3) Data frame sent\nI0318 12:09:13.221774 2671 log.go:172] (0xc000162840) Data frame received for 5\nI0318 12:09:13.221827 2671 log.go:172] (0xc0003be000) (5) Data frame handling\nI0318 12:09:13.221881 2671 log.go:172] (0xc000162840) Data frame received for 3\nI0318 12:09:13.221905 2671 log.go:172] (0xc00073e000) (3) Data frame handling\nI0318 12:09:13.223777 2671 log.go:172] (0xc000162840) Data frame received for 1\nI0318 12:09:13.223804 2671 log.go:172] (0xc0000175e0) (1) Data frame handling\nI0318 12:09:13.223820 2671 log.go:172] (0xc0000175e0) (1) Data frame sent\nI0318 12:09:13.223845 2671 log.go:172] (0xc000162840) (0xc0000175e0) Stream removed, broadcasting: 1\nI0318 12:09:13.223868 2671 log.go:172] (0xc000162840) Go away received\nI0318 12:09:13.224130 2671 log.go:172] (0xc000162840) (0xc0000175e0) Stream removed, broadcasting: 1\nI0318 12:09:13.224150 2671 log.go:172] (0xc000162840) (0xc00073e000) Stream removed, broadcasting: 3\nI0318 12:09:13.224158 2671 log.go:172] (0xc000162840) (0xc0003be000) Stream removed, broadcasting: 5\n" Mar 18 12:09:13.228: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 12:09:13.228: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 12:09:13.292: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 18 12:09:23.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tlbwx ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:09:23.558: INFO: stderr: "I0318 12:09:23.459904 2694 log.go:172] (0xc0008342c0) (0xc0005e52c0) Create stream\nI0318 12:09:23.459962 2694 log.go:172] (0xc0008342c0) (0xc0005e52c0) Stream added, broadcasting: 1\nI0318 12:09:23.462520 2694 log.go:172] (0xc0008342c0) Reply frame received for 1\nI0318 12:09:23.462585 2694 log.go:172] (0xc0008342c0) (0xc00043e000) Create stream\nI0318 12:09:23.462605 2694 log.go:172] (0xc0008342c0) (0xc00043e000) Stream added, broadcasting: 3\nI0318 12:09:23.463596 2694 log.go:172] (0xc0008342c0) Reply frame received for 3\nI0318 12:09:23.463641 2694 log.go:172] (0xc0008342c0) (0xc0005e5360) Create stream\nI0318 12:09:23.463657 2694 log.go:172] (0xc0008342c0) (0xc0005e5360) Stream added, broadcasting: 5\nI0318 12:09:23.464745 2694 log.go:172] (0xc0008342c0) Reply frame received for 5\nI0318 12:09:23.553966 2694 log.go:172] (0xc0008342c0) Data frame received for 5\nI0318 12:09:23.554006 2694 log.go:172] (0xc0005e5360) (5) Data frame handling\nI0318 12:09:23.554027 2694 log.go:172] (0xc0008342c0) Data frame received for 3\nI0318 12:09:23.554032 2694 log.go:172] (0xc00043e000) (3) Data frame handling\nI0318 12:09:23.554039 2694 log.go:172] (0xc00043e000) (3) Data frame sent\nI0318 12:09:23.554045 2694 log.go:172] (0xc0008342c0) Data frame received for 3\nI0318 12:09:23.554049 2694 log.go:172] (0xc00043e000) (3) Data frame handling\nI0318 12:09:23.555451 2694 log.go:172] (0xc0008342c0) Data frame received for 1\nI0318 12:09:23.555468 2694 
log.go:172] (0xc0005e52c0) (1) Data frame handling\nI0318 12:09:23.555478 2694 log.go:172] (0xc0005e52c0) (1) Data frame sent\nI0318 12:09:23.555491 2694 log.go:172] (0xc0008342c0) (0xc0005e52c0) Stream removed, broadcasting: 1\nI0318 12:09:23.555549 2694 log.go:172] (0xc0008342c0) Go away received\nI0318 12:09:23.555654 2694 log.go:172] (0xc0008342c0) (0xc0005e52c0) Stream removed, broadcasting: 1\nI0318 12:09:23.555669 2694 log.go:172] (0xc0008342c0) (0xc00043e000) Stream removed, broadcasting: 3\nI0318 12:09:23.555677 2694 log.go:172] (0xc0008342c0) (0xc0005e5360) Stream removed, broadcasting: 5\n" Mar 18 12:09:23.558: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 12:09:23.558: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 12:09:33.579: INFO: Waiting for StatefulSet e2e-tests-statefulset-tlbwx/ss2 to complete update Mar 18 12:09:33.579: INFO: Waiting for Pod e2e-tests-statefulset-tlbwx/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 18 12:09:33.579: INFO: Waiting for Pod e2e-tests-statefulset-tlbwx/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 18 12:09:43.587: INFO: Waiting for StatefulSet e2e-tests-statefulset-tlbwx/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 18 12:09:53.587: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tlbwx Mar 18 12:09:53.590: INFO: Scaling statefulset ss2 to 0 Mar 18 12:10:03.611: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 12:10:03.614: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:10:03.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-tlbwx" for this suite. 
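The rolling update and rollback above are driven entirely by the StatefulSet template and its update strategy. A trimmed sketch of the equivalent object (the suite builds it programmatically; the label key/value here are illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test                      # matches the headless service the test creates
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate                  # pods are replaced in reverse ordinal order, as logged
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # bumped to 1.15-alpine, then reverted

Each template change is recorded as a new revision (the ss2-6c5cd755cd and ss2-7c9b54fd4c names the log waits on), so rolling back is simply another template update that makes the earlier revision current again.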
Mar 18 12:10:11.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:10:11.752: INFO: namespace: e2e-tests-statefulset-tlbwx, resource: bindings, ignored listing per whitelist Mar 18 12:10:11.762: INFO: namespace e2e-tests-statefulset-tlbwx deletion completed in 8.103159246s • [SLOW TEST:129.521 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:10:11.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 12:10:11.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-gw8vm' Mar 18 12:10:11.964: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 12:10:11.964: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 18 12:10:11.991: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-j4g5t] Mar 18 12:10:11.991: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-j4g5t" in namespace "e2e-tests-kubectl-gw8vm" to be "running and ready" Mar 18 12:10:12.035: INFO: Pod "e2e-test-nginx-rc-j4g5t": Phase="Pending", Reason="", readiness=false. Elapsed: 44.55124ms Mar 18 12:10:14.038: INFO: Pod "e2e-test-nginx-rc-j4g5t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047523728s Mar 18 12:10:16.042: INFO: Pod "e2e-test-nginx-rc-j4g5t": Phase="Running", Reason="", readiness=true. Elapsed: 4.051559525s Mar 18 12:10:16.042: INFO: Pod "e2e-test-nginx-rc-j4g5t" satisfied condition "running and ready" Mar 18 12:10:16.042: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-j4g5t] Mar 18 12:10:16.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gw8vm' Mar 18 12:10:16.166: INFO: stderr: "" Mar 18 12:10:16.166: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Mar 18 12:10:16.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gw8vm' Mar 18 12:10:16.275: INFO: stderr: "" Mar 18 12:10:16.275: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:10:16.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gw8vm" for this suite. Mar 18 12:10:38.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:10:38.326: INFO: namespace: e2e-tests-kubectl-gw8vm, resource: bindings, ignored listing per whitelist Mar 18 12:10:38.375: INFO: namespace e2e-tests-kubectl-gw8vm deletion completed in 22.096027102s • [SLOW TEST:26.613 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:10:38.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:10:44.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-rcb79" for this suite. 
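The namespace test above needs nothing more than a Service created inside a throwaway namespace; a sketch with hypothetical names:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest-example
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80

Deleting the Namespace moves it into Terminating, and the namespace controller removes every namespaced object in it, including this Service, before the Namespace itself disappears; a namespace recreated under the same name therefore starts empty, which is what the "Verifying there is no service in the namespace" step confirms.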
Mar 18 12:10:50.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:10:50.683: INFO: namespace: e2e-tests-namespaces-rcb79, resource: bindings, ignored listing per whitelist Mar 18 12:10:50.740: INFO: namespace e2e-tests-namespaces-rcb79 deletion completed in 6.087190011s STEP: Destroying namespace "e2e-tests-nsdeletetest-62vvg" for this suite. Mar 18 12:10:50.742: INFO: Namespace e2e-tests-nsdeletetest-62vvg was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-wzwlg" for this suite. Mar 18 12:10:56.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:10:56.816: INFO: namespace: e2e-tests-nsdeletetest-wzwlg, resource: bindings, ignored listing per whitelist Mar 18 12:10:56.849: INFO: namespace e2e-tests-nsdeletetest-wzwlg deletion completed in 6.106911323s • [SLOW TEST:18.473 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:10:56.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-ntq2l [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-ntq2l STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-ntq2l Mar 18 12:10:56.976: INFO: Found 0 stateful pods, waiting for 1 Mar 18 12:11:06.980: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 18 12:11:06.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 12:11:07.275: INFO: stderr: "I0318 12:11:07.112050 2784 log.go:172] (0xc000780160) (0xc0006a85a0) Create stream\nI0318 12:11:07.112104 2784 log.go:172] (0xc000780160) (0xc0006a85a0) Stream added, broadcasting: 1\nI0318 12:11:07.114807 2784 log.go:172] (0xc000780160) Reply frame received for 1\nI0318 12:11:07.114863 2784 log.go:172] (0xc000780160) (0xc0006a8640) Create stream\nI0318 12:11:07.114885 2784 
log.go:172] (0xc000780160) (0xc0006a8640) Stream added, broadcasting: 3\nI0318 12:11:07.115918 2784 log.go:172] (0xc000780160) Reply frame received for 3\nI0318 12:11:07.115956 2784 log.go:172] (0xc000780160) (0xc00053cc80) Create stream\nI0318 12:11:07.115970 2784 log.go:172] (0xc000780160) (0xc00053cc80) Stream added, broadcasting: 5\nI0318 12:11:07.116902 2784 log.go:172] (0xc000780160) Reply frame received for 5\nI0318 12:11:07.269625 2784 log.go:172] (0xc000780160) Data frame received for 3\nI0318 12:11:07.269673 2784 log.go:172] (0xc0006a8640) (3) Data frame handling\nI0318 12:11:07.269695 2784 log.go:172] (0xc0006a8640) (3) Data frame sent\nI0318 12:11:07.269999 2784 log.go:172] (0xc000780160) Data frame received for 3\nI0318 12:11:07.270031 2784 log.go:172] (0xc0006a8640) (3) Data frame handling\nI0318 12:11:07.270061 2784 log.go:172] (0xc000780160) Data frame received for 5\nI0318 12:11:07.270100 2784 log.go:172] (0xc00053cc80) (5) Data frame handling\nI0318 12:11:07.271971 2784 log.go:172] (0xc000780160) Data frame received for 1\nI0318 12:11:07.271986 2784 log.go:172] (0xc0006a85a0) (1) Data frame handling\nI0318 12:11:07.271997 2784 log.go:172] (0xc0006a85a0) (1) Data frame sent\nI0318 12:11:07.272007 2784 log.go:172] (0xc000780160) (0xc0006a85a0) Stream removed, broadcasting: 1\nI0318 12:11:07.272119 2784 log.go:172] (0xc000780160) Go away received\nI0318 12:11:07.272156 2784 log.go:172] (0xc000780160) (0xc0006a85a0) Stream removed, broadcasting: 1\nI0318 12:11:07.272178 2784 log.go:172] (0xc000780160) (0xc0006a8640) Stream removed, broadcasting: 3\nI0318 12:11:07.272184 2784 log.go:172] (0xc000780160) (0xc00053cc80) Stream removed, broadcasting: 5\n" Mar 18 12:11:07.275: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 12:11:07.275: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 12:11:07.279: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 18 12:11:17.283: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 12:11:17.283: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 12:11:17.298: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:17.298: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:10:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:10:56 +0000 UTC }] Mar 18 12:11:17.298: INFO: Mar 18 12:11:17.298: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 18 12:11:18.302: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995585501s Mar 18 12:11:19.361: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991351372s Mar 18 12:11:20.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.932497958s Mar 18 12:11:21.371: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.927678511s Mar 18 12:11:22.376: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.922883861s Mar 18 12:11:23.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.917336902s Mar 18 12:11:24.387: INFO: Verifying statefulset ss doesn't scale 
past 3 for another 2.912149261s Mar 18 12:11:25.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.906681997s Mar 18 12:11:26.397: INFO: Verifying statefulset ss doesn't scale past 3 for another 901.653969ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-ntq2l Mar 18 12:11:27.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:11:27.586: INFO: stderr: "I0318 12:11:27.529234 2807 log.go:172] (0xc0008282c0) (0xc00072a640) Create stream\nI0318 12:11:27.529284 2807 log.go:172] (0xc0008282c0) (0xc00072a640) Stream added, broadcasting: 1\nI0318 12:11:27.534291 2807 log.go:172] (0xc0008282c0) Reply frame received for 1\nI0318 12:11:27.534346 2807 log.go:172] (0xc0008282c0) (0xc000688c80) Create stream\nI0318 12:11:27.534357 2807 log.go:172] (0xc0008282c0) (0xc000688c80) Stream added, broadcasting: 3\nI0318 12:11:27.535276 2807 log.go:172] (0xc0008282c0) Reply frame received for 3\nI0318 12:11:27.535300 2807 log.go:172] (0xc0008282c0) (0xc000362000) Create stream\nI0318 12:11:27.535307 2807 log.go:172] (0xc0008282c0) (0xc000362000) Stream added, broadcasting: 5\nI0318 12:11:27.536322 2807 log.go:172] (0xc0008282c0) Reply frame received for 5\nI0318 12:11:27.581827 2807 log.go:172] (0xc0008282c0) Data frame received for 3\nI0318 12:11:27.581865 2807 log.go:172] (0xc000688c80) (3) Data frame handling\nI0318 12:11:27.581888 2807 log.go:172] (0xc000688c80) (3) Data frame sent\nI0318 12:11:27.581902 2807 log.go:172] (0xc0008282c0) Data frame received for 5\nI0318 12:11:27.581913 2807 log.go:172] (0xc000362000) (5) Data frame handling\nI0318 12:11:27.582010 2807 log.go:172] (0xc0008282c0) Data frame received for 3\nI0318 12:11:27.582035 2807 log.go:172] (0xc000688c80) (3) Data frame handling\nI0318 12:11:27.583606 2807 log.go:172] (0xc0008282c0) Data frame received for 1\nI0318 12:11:27.583622 2807 log.go:172] (0xc00072a640) (1) Data frame handling\nI0318 12:11:27.583632 2807 log.go:172] (0xc00072a640) (1) Data frame sent\nI0318 12:11:27.583642 2807 log.go:172] (0xc0008282c0) (0xc00072a640) Stream removed, broadcasting: 1\nI0318 12:11:27.583742 2807 log.go:172] (0xc0008282c0) Go away received\nI0318 12:11:27.583796 2807 log.go:172] (0xc0008282c0) (0xc00072a640) Stream removed, broadcasting: 1\nI0318 12:11:27.583813 2807 log.go:172] (0xc0008282c0) (0xc000688c80) Stream removed, broadcasting: 3\nI0318 12:11:27.583823 2807 log.go:172] (0xc0008282c0) (0xc000362000) Stream removed, broadcasting: 5\n" Mar 18 12:11:27.587: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 12:11:27.587: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 12:11:27.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:11:27.806: INFO: stderr: "I0318 12:11:27.722647 2829 log.go:172] (0xc000138790) (0xc00067f360) Create stream\nI0318 12:11:27.722706 2829 log.go:172] (0xc000138790) (0xc00067f360) Stream added, broadcasting: 1\nI0318 12:11:27.725570 2829 log.go:172] (0xc000138790) Reply frame received for 1\nI0318 12:11:27.725613 2829 log.go:172] (0xc000138790) (0xc00067f400) Create stream\nI0318 12:11:27.725629 2829 
log.go:172] (0xc000138790) (0xc00067f400) Stream added, broadcasting: 3\nI0318 12:11:27.726639 2829 log.go:172] (0xc000138790) Reply frame received for 3\nI0318 12:11:27.726688 2829 log.go:172] (0xc000138790) (0xc000126000) Create stream\nI0318 12:11:27.726705 2829 log.go:172] (0xc000138790) (0xc000126000) Stream added, broadcasting: 5\nI0318 12:11:27.727549 2829 log.go:172] (0xc000138790) Reply frame received for 5\nI0318 12:11:27.799936 2829 log.go:172] (0xc000138790) Data frame received for 5\nI0318 12:11:27.799957 2829 log.go:172] (0xc000126000) (5) Data frame handling\nI0318 12:11:27.799965 2829 log.go:172] (0xc000126000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0318 12:11:27.800000 2829 log.go:172] (0xc000138790) Data frame received for 5\nI0318 12:11:27.800008 2829 log.go:172] (0xc000126000) (5) Data frame handling\nI0318 12:11:27.800025 2829 log.go:172] (0xc000138790) Data frame received for 3\nI0318 12:11:27.800033 2829 log.go:172] (0xc00067f400) (3) Data frame handling\nI0318 12:11:27.800041 2829 log.go:172] (0xc00067f400) (3) Data frame sent\nI0318 12:11:27.800049 2829 log.go:172] (0xc000138790) Data frame received for 3\nI0318 12:11:27.800056 2829 log.go:172] (0xc00067f400) (3) Data frame handling\nI0318 12:11:27.802029 2829 log.go:172] (0xc000138790) Data frame received for 1\nI0318 12:11:27.802048 2829 log.go:172] (0xc00067f360) (1) Data frame handling\nI0318 12:11:27.802064 2829 log.go:172] (0xc00067f360) (1) Data frame sent\nI0318 12:11:27.802072 2829 log.go:172] (0xc000138790) (0xc00067f360) Stream removed, broadcasting: 1\nI0318 12:11:27.802198 2829 log.go:172] (0xc000138790) (0xc00067f360) Stream removed, broadcasting: 1\nI0318 12:11:27.802211 2829 log.go:172] (0xc000138790) (0xc00067f400) Stream removed, broadcasting: 3\nI0318 12:11:27.802218 2829 log.go:172] (0xc000138790) (0xc000126000) Stream removed, broadcasting: 5\nI0318 12:11:27.802243 2829 log.go:172] (0xc000138790) Go away received\n" Mar 18 12:11:27.806: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 12:11:27.806: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 12:11:27.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:11:28.003: INFO: stderr: "I0318 12:11:27.940334 2853 log.go:172] (0xc000154790) (0xc000315540) Create stream\nI0318 12:11:27.940400 2853 log.go:172] (0xc000154790) (0xc000315540) Stream added, broadcasting: 1\nI0318 12:11:27.943353 2853 log.go:172] (0xc000154790) Reply frame received for 1\nI0318 12:11:27.943403 2853 log.go:172] (0xc000154790) (0xc0001caa00) Create stream\nI0318 12:11:27.943422 2853 log.go:172] (0xc000154790) (0xc0001caa00) Stream added, broadcasting: 3\nI0318 12:11:27.944270 2853 log.go:172] (0xc000154790) Reply frame received for 3\nI0318 12:11:27.944308 2853 log.go:172] (0xc000154790) (0xc000912000) Create stream\nI0318 12:11:27.944320 2853 log.go:172] (0xc000154790) (0xc000912000) Stream added, broadcasting: 5\nI0318 12:11:27.945012 2853 log.go:172] (0xc000154790) Reply frame received for 5\nI0318 12:11:27.996258 2853 log.go:172] (0xc000154790) Data frame received for 5\nI0318 12:11:27.996320 2853 log.go:172] (0xc000154790) Data frame received for 3\nI0318 12:11:27.996362 2853 log.go:172] (0xc0001caa00) (3) Data frame handling\nI0318 12:11:27.996386 2853 
log.go:172] (0xc0001caa00) (3) Data frame sent\nI0318 12:11:27.996403 2853 log.go:172] (0xc000154790) Data frame received for 3\nI0318 12:11:27.996417 2853 log.go:172] (0xc0001caa00) (3) Data frame handling\nI0318 12:11:27.996459 2853 log.go:172] (0xc000912000) (5) Data frame handling\nI0318 12:11:27.996495 2853 log.go:172] (0xc000912000) (5) Data frame sent\nI0318 12:11:27.996517 2853 log.go:172] (0xc000154790) Data frame received for 5\nI0318 12:11:27.996539 2853 log.go:172] (0xc000912000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0318 12:11:27.999062 2853 log.go:172] (0xc000154790) Data frame received for 1\nI0318 12:11:27.999099 2853 log.go:172] (0xc000315540) (1) Data frame handling\nI0318 12:11:27.999140 2853 log.go:172] (0xc000315540) (1) Data frame sent\nI0318 12:11:27.999175 2853 log.go:172] (0xc000154790) (0xc000315540) Stream removed, broadcasting: 1\nI0318 12:11:27.999444 2853 log.go:172] (0xc000154790) (0xc000315540) Stream removed, broadcasting: 1\nI0318 12:11:27.999468 2853 log.go:172] (0xc000154790) (0xc0001caa00) Stream removed, broadcasting: 3\nI0318 12:11:27.999476 2853 log.go:172] (0xc000154790) (0xc000912000) Stream removed, broadcasting: 5\n" Mar 18 12:11:28.003: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 12:11:28.003: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 12:11:28.008: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 12:11:28.008: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 12:11:28.008: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 18 12:11:28.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 12:11:28.223: INFO: stderr: "I0318 12:11:28.129758 2875 log.go:172] (0xc000854160) (0xc000692640) Create stream\nI0318 12:11:28.129835 2875 log.go:172] (0xc000854160) (0xc000692640) Stream added, broadcasting: 1\nI0318 12:11:28.132669 2875 log.go:172] (0xc000854160) Reply frame received for 1\nI0318 12:11:28.132722 2875 log.go:172] (0xc000854160) (0xc0003badc0) Create stream\nI0318 12:11:28.132740 2875 log.go:172] (0xc000854160) (0xc0003badc0) Stream added, broadcasting: 3\nI0318 12:11:28.133918 2875 log.go:172] (0xc000854160) Reply frame received for 3\nI0318 12:11:28.133982 2875 log.go:172] (0xc000854160) (0xc0003baf00) Create stream\nI0318 12:11:28.134014 2875 log.go:172] (0xc000854160) (0xc0003baf00) Stream added, broadcasting: 5\nI0318 12:11:28.135088 2875 log.go:172] (0xc000854160) Reply frame received for 5\nI0318 12:11:28.217740 2875 log.go:172] (0xc000854160) Data frame received for 5\nI0318 12:11:28.217761 2875 log.go:172] (0xc0003baf00) (5) Data frame handling\nI0318 12:11:28.217801 2875 log.go:172] (0xc000854160) Data frame received for 3\nI0318 12:11:28.217830 2875 log.go:172] (0xc0003badc0) (3) Data frame handling\nI0318 12:11:28.217852 2875 log.go:172] (0xc0003badc0) (3) Data frame sent\nI0318 12:11:28.217869 2875 log.go:172] (0xc000854160) Data frame received for 3\nI0318 12:11:28.217886 2875 log.go:172] (0xc0003badc0) (3) Data frame handling\nI0318 12:11:28.219726 2875 log.go:172] (0xc000854160) Data frame received for 1\nI0318 
12:11:28.219759 2875 log.go:172] (0xc000692640) (1) Data frame handling\nI0318 12:11:28.219782 2875 log.go:172] (0xc000692640) (1) Data frame sent\nI0318 12:11:28.219802 2875 log.go:172] (0xc000854160) (0xc000692640) Stream removed, broadcasting: 1\nI0318 12:11:28.219835 2875 log.go:172] (0xc000854160) Go away received\nI0318 12:11:28.220047 2875 log.go:172] (0xc000854160) (0xc000692640) Stream removed, broadcasting: 1\nI0318 12:11:28.220067 2875 log.go:172] (0xc000854160) (0xc0003badc0) Stream removed, broadcasting: 3\nI0318 12:11:28.220089 2875 log.go:172] (0xc000854160) (0xc0003baf00) Stream removed, broadcasting: 5\n" Mar 18 12:11:28.223: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 12:11:28.223: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 12:11:28.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 12:11:28.459: INFO: stderr: "I0318 12:11:28.346904 2898 log.go:172] (0xc0008302c0) (0xc000700640) Create stream\nI0318 12:11:28.346961 2898 log.go:172] (0xc0008302c0) (0xc000700640) Stream added, broadcasting: 1\nI0318 12:11:28.351694 2898 log.go:172] (0xc0008302c0) Reply frame received for 1\nI0318 12:11:28.351773 2898 log.go:172] (0xc0008302c0) (0xc0005dec80) Create stream\nI0318 12:11:28.351798 2898 log.go:172] (0xc0008302c0) (0xc0005dec80) Stream added, broadcasting: 3\nI0318 12:11:28.353646 2898 log.go:172] (0xc0008302c0) Reply frame received for 3\nI0318 12:11:28.353691 2898 log.go:172] (0xc0008302c0) (0xc0005dedc0) Create stream\nI0318 12:11:28.353703 2898 log.go:172] (0xc0008302c0) (0xc0005dedc0) Stream added, broadcasting: 5\nI0318 12:11:28.355088 2898 log.go:172] (0xc0008302c0) Reply frame received for 5\nI0318 12:11:28.452360 2898 log.go:172] (0xc0008302c0) Data frame received for 3\nI0318 12:11:28.452427 2898 log.go:172] (0xc0005dec80) (3) Data frame handling\nI0318 12:11:28.452452 2898 log.go:172] (0xc0005dec80) (3) Data frame sent\nI0318 12:11:28.452471 2898 log.go:172] (0xc0008302c0) Data frame received for 3\nI0318 12:11:28.452487 2898 log.go:172] (0xc0005dec80) (3) Data frame handling\nI0318 12:11:28.452586 2898 log.go:172] (0xc0008302c0) Data frame received for 5\nI0318 12:11:28.452612 2898 log.go:172] (0xc0005dedc0) (5) Data frame handling\nI0318 12:11:28.454969 2898 log.go:172] (0xc0008302c0) Data frame received for 1\nI0318 12:11:28.455034 2898 log.go:172] (0xc000700640) (1) Data frame handling\nI0318 12:11:28.455083 2898 log.go:172] (0xc000700640) (1) Data frame sent\nI0318 12:11:28.455108 2898 log.go:172] (0xc0008302c0) (0xc000700640) Stream removed, broadcasting: 1\nI0318 12:11:28.455129 2898 log.go:172] (0xc0008302c0) Go away received\nI0318 12:11:28.455429 2898 log.go:172] (0xc0008302c0) (0xc000700640) Stream removed, broadcasting: 1\nI0318 12:11:28.455463 2898 log.go:172] (0xc0008302c0) (0xc0005dec80) Stream removed, broadcasting: 3\nI0318 12:11:28.455477 2898 log.go:172] (0xc0008302c0) (0xc0005dedc0) Stream removed, broadcasting: 5\n" Mar 18 12:11:28.459: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 12:11:28.459: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 12:11:28.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-ntq2l ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 12:11:28.708: INFO: stderr: "I0318 12:11:28.601784 2921 log.go:172] (0xc0007b02c0) (0xc00071e5a0) Create stream\nI0318 12:11:28.601837 2921 log.go:172] (0xc0007b02c0) (0xc00071e5a0) Stream added, broadcasting: 1\nI0318 12:11:28.604069 2921 log.go:172] (0xc0007b02c0) Reply frame received for 1\nI0318 12:11:28.604128 2921 log.go:172] (0xc0007b02c0) (0xc0005fedc0) Create stream\nI0318 12:11:28.604148 2921 log.go:172] (0xc0007b02c0) (0xc0005fedc0) Stream added, broadcasting: 3\nI0318 12:11:28.605853 2921 log.go:172] (0xc0007b02c0) Reply frame received for 3\nI0318 12:11:28.605915 2921 log.go:172] (0xc0007b02c0) (0xc0006a0000) Create stream\nI0318 12:11:28.605937 2921 log.go:172] (0xc0007b02c0) (0xc0006a0000) Stream added, broadcasting: 5\nI0318 12:11:28.606844 2921 log.go:172] (0xc0007b02c0) Reply frame received for 5\nI0318 12:11:28.702328 2921 log.go:172] (0xc0007b02c0) Data frame received for 3\nI0318 12:11:28.702387 2921 log.go:172] (0xc0005fedc0) (3) Data frame handling\nI0318 12:11:28.702428 2921 log.go:172] (0xc0005fedc0) (3) Data frame sent\nI0318 12:11:28.702546 2921 log.go:172] (0xc0007b02c0) Data frame received for 5\nI0318 12:11:28.702588 2921 log.go:172] (0xc0007b02c0) Data frame received for 3\nI0318 12:11:28.702634 2921 log.go:172] (0xc0005fedc0) (3) Data frame handling\nI0318 12:11:28.702671 2921 log.go:172] (0xc0006a0000) (5) Data frame handling\nI0318 12:11:28.704321 2921 log.go:172] (0xc0007b02c0) Data frame received for 1\nI0318 12:11:28.704340 2921 log.go:172] (0xc00071e5a0) (1) Data frame handling\nI0318 12:11:28.704349 2921 log.go:172] (0xc00071e5a0) (1) Data frame sent\nI0318 12:11:28.704361 2921 log.go:172] (0xc0007b02c0) (0xc00071e5a0) Stream removed, broadcasting: 1\nI0318 12:11:28.704459 2921 log.go:172] (0xc0007b02c0) Go away received\nI0318 12:11:28.704553 2921 log.go:172] (0xc0007b02c0) (0xc00071e5a0) Stream removed, broadcasting: 1\nI0318 12:11:28.704572 2921 log.go:172] (0xc0007b02c0) (0xc0005fedc0) Stream removed, broadcasting: 3\nI0318 12:11:28.704587 2921 log.go:172] (0xc0007b02c0) (0xc0006a0000) Stream removed, broadcasting: 5\n" Mar 18 12:11:28.708: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 12:11:28.708: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 12:11:28.708: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 12:11:28.712: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 18 12:11:38.721: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 12:11:38.721: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 18 12:11:38.721: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 18 12:11:38.736: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:38.736: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:10:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:10:56 +0000 UTC }] Mar 18 
12:11:38.736: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:38.736: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:38.736: INFO: Mar 18 12:11:38.736: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 18 12:11:39.876: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:39.877: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:10:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:10:56 +0000 UTC }] Mar 18 12:11:39.877: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:39.877: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:39.877: INFO: Mar 18 12:11:39.877: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 18 12:11:40.882: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:40.882: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:10:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:10:56 +0000 UTC }] Mar 18 12:11:40.882: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:40.882: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:40.882: INFO: Mar 18 12:11:40.882: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 18 12:11:41.887: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:41.887: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:41.887: INFO: Mar 18 12:11:41.887: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 18 12:11:42.912: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:42.912: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:42.912: INFO: Mar 18 12:11:42.912: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 18 12:11:43.916: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:43.916: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:43.917: INFO: Mar 18 12:11:43.917: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 18 12:11:44.921: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:44.921: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:44.921: INFO: Mar 18 12:11:44.921: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 18 12:11:45.926: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:45.926: INFO: ss-1 
hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:45.926: INFO: Mar 18 12:11:45.926: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 18 12:11:46.930: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:46.930: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:46.930: INFO: Mar 18 12:11:46.930: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 18 12:11:47.935: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 12:11:47.935: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:11:17 +0000 UTC }] Mar 18 12:11:47.935: INFO: Mar 18 12:11:47.935: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-ntq2l Mar 18 12:11:48.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:11:49.071: INFO: rc: 1 Mar 18 12:11:49.071: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002089e30 exit status 1 true [0xc001c14488 0xc001c144a0 0xc001c144b8] [0xc001c14488 0xc001c144a0 0xc001c144b8] [0xc001c14498 0xc001c144b0] [0x935700 0x935700] 0xc001a4ea80 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Mar 18 12:11:59.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:11:59.155: INFO: rc: 1 Mar 18 12:11:59.155: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00150db60 exit status 1 true [0xc0015bf9e8 0xc0015bfa50 0xc0015bfa90] [0xc0015bf9e8 0xc0015bfa50 0xc0015bfa90] 
[0xc0015bfa18 0xc0015bfa80] [0x935700 0x935700] 0xc00177acc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:12:09.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:12:09.244: INFO: rc: 1 Mar 18 12:12:09.244: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00150dc80 exit status 1 true [0xc0015bfab0 0xc0015bfb00 0xc0015bfb40] [0xc0015bfab0 0xc0015bfb00 0xc0015bfb40] [0xc0015bfaf0 0xc0015bfb28] [0x935700 0x935700] 0xc00177b3e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:12:19.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:12:19.332: INFO: rc: 1 Mar 18 12:12:19.333: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002238000 exit status 1 true [0xc00000ff38 0xc00000ff60 0xc00000ff78] [0xc00000ff38 0xc00000ff60 0xc00000ff78] [0xc00000ff50 0xc00000ff70] [0x935700 0x935700] 0xc000ae7740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:12:29.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:12:29.420: INFO: rc: 1 Mar 18 12:12:29.420: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0003cd8f0 exit status 1 true [0xc0012d0718 0xc0012d0730 0xc0012d0748] [0xc0012d0718 0xc0012d0730 0xc0012d0748] [0xc0012d0728 0xc0012d0740] [0x935700 0x935700] 0xc000cc4060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:12:39.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:12:39.505: INFO: rc: 1 Mar 18 12:12:39.505: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00150ddd0 exit status 1 true [0xc0015bfb60 0xc0015bfb98 0xc0015bfbf8] [0xc0015bfb60 0xc0015bfb98 0xc0015bfbf8] [0xc0015bfb88 0xc0015bfbc8] [0x935700 0x935700] 0xc00177baa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:12:49.505: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:12:49.592: INFO: rc: 1 Mar 18 12:12:49.592: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00146a030 exit status 1 true [0xc001c144c0 0xc001c144d8 0xc001c144f0] [0xc001c144c0 0xc001c144d8 0xc001c144f0] [0xc001c144d0 0xc001c144e8] [0x935700 0x935700] 0xc001a4ede0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:12:59.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:12:59.685: INFO: rc: 1 Mar 18 12:12:59.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001a8e270 exit status 1 true [0xc00000e150 0xc00000e2f8 0xc00000ebe0] [0xc00000e150 0xc00000e2f8 0xc00000ebe0] [0xc00000e2b8 0xc00000ebd0] [0x935700 0x935700] 0xc002646300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:13:09.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:13:09.778: INFO: rc: 1 Mar 18 12:13:09.778: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00238c120 exit status 1 true [0xc001512000 0xc001512018 0xc001512030] [0xc001512000 0xc001512018 0xc001512030] [0xc001512010 0xc001512028] [0x935700 0x935700] 0xc002473260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:13:19.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:13:19.865: INFO: rc: 1 Mar 18 12:13:19.865: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00238c240 exit status 1 true [0xc001512040 0xc001512078 0xc0015120d0] [0xc001512040 0xc001512078 0xc0015120d0] [0xc001512060 0xc0015120b8] [0x935700 0x935700] 0xc002473500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:13:29.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 
12:13:29.953: INFO: rc: 1 Mar 18 12:13:29.953: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001842150 exit status 1 true [0xc001846000 0xc001846018 0xc001846030] [0xc001846000 0xc001846018 0xc001846030] [0xc001846010 0xc001846028] [0x935700 0x935700] 0xc002052660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:13:39.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:13:40.046: INFO: rc: 1 Mar 18 12:13:40.046: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0018422a0 exit status 1 true [0xc001846038 0xc001846050 0xc001846068] [0xc001846038 0xc001846050 0xc001846068] [0xc001846048 0xc001846060] [0x935700 0x935700] 0xc002052c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:13:50.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:13:50.129: INFO: rc: 1 Mar 18 12:13:50.129: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00238c390 exit status 1 true [0xc0015120d8 0xc0015120f0 0xc001512108] [0xc0015120d8 0xc0015120f0 0xc001512108] [0xc0015120e8 0xc001512100] [0x935700 0x935700] 0xc0024737a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:14:00.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:14:00.228: INFO: rc: 1 Mar 18 12:14:00.228: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001a8e390 exit status 1 true [0xc00000ebf8 0xc00000ed30 0xc00000ee88] [0xc00000ebf8 0xc00000ed30 0xc00000ee88] [0xc00000ed18 0xc00000ee48] [0x935700 0x935700] 0xc0026465a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:14:10.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:14:10.325: INFO: rc: 1 Mar 18 12:14:10.326: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001a8e4e0 exit status 1 true [0xc00000eed8 0xc00000efa0 0xc00000efe8] [0xc00000eed8 0xc00000efa0 0xc00000efe8] [0xc00000ef78 0xc00000efe0] [0x935700 0x935700] 0xc0026468a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:14:20.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:14:20.414: INFO: rc: 1 Mar 18 12:14:20.414: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001a8e600 exit status 1 true [0xc00000f058 0xc00000f140 0xc00000f1d0] [0xc00000f058 0xc00000f140 0xc00000f1d0] [0xc00000f0d8 0xc00000f188] [0x935700 0x935700] 0xc0026472c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:14:30.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:14:30.511: INFO: rc: 1 Mar 18 12:14:30.511: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0018423c0 exit status 1 true [0xc001846070 0xc001846088 0xc0018460a0] [0xc001846070 0xc001846088 0xc0018460a0] [0xc001846080 0xc001846098] [0x935700 0x935700] 0xc002052ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:14:40.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:14:40.626: INFO: rc: 1 Mar 18 12:14:40.626: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00238c510 exit status 1 true [0xc001512110 0xc001512130 0xc001512148] [0xc001512110 0xc001512130 0xc001512148] [0xc001512128 0xc001512140] [0x935700 0x935700] 0xc002473a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:14:50.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:14:50.713: INFO: rc: 1 Mar 18 12:14:50.713: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001842540 exit 
status 1 true [0xc0018460a8 0xc0018460c0 0xc0018460d8] [0xc0018460a8 0xc0018460c0 0xc0018460d8] [0xc0018460b8 0xc0018460d0] [0x935700 0x935700] 0xc002053140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:15:00.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:15:00.803: INFO: rc: 1 Mar 18 12:15:00.803: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001a8e2a0 exit status 1 true [0xc00000e150 0xc00000e2f8 0xc00000ebe0] [0xc00000e150 0xc00000e2f8 0xc00000ebe0] [0xc00000e2b8 0xc00000ebd0] [0x935700 0x935700] 0xc0026462a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:15:10.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:15:10.887: INFO: rc: 1 Mar 18 12:15:10.887: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0002bac00 exit status 1 true [0xc001846000 0xc001846018 0xc001846030] [0xc001846000 0xc001846018 0xc001846030] [0xc001846010 0xc001846028] [0x935700 0x935700] 0xc0020526c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:15:20.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:15:20.975: INFO: rc: 1 Mar 18 12:15:20.975: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0002bae40 exit status 1 true [0xc001846038 0xc001846050 0xc001846068] [0xc001846038 0xc001846050 0xc001846068] [0xc001846048 0xc001846060] [0x935700 0x935700] 0xc002052c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:15:30.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:15:31.066: INFO: rc: 1 Mar 18 12:15:31.066: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001842180 exit status 1 true [0xc001512000 0xc001512018 0xc001512030] [0xc001512000 0xc001512018 0xc001512030] [0xc001512010 0xc001512028] [0x935700 0x935700] 0xc002473260 }: Command stdout: stderr: Error from 
server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:15:41.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:15:41.161: INFO: rc: 1 Mar 18 12:15:41.161: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001a8e3f0 exit status 1 true [0xc00000ebf8 0xc00000ed30 0xc00000ee88] [0xc00000ebf8 0xc00000ed30 0xc00000ee88] [0xc00000ed18 0xc00000ee48] [0x935700 0x935700] 0xc002646540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:15:51.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:15:51.249: INFO: rc: 1 Mar 18 12:15:51.249: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0002bb410 exit status 1 true [0xc001846070 0xc001846088 0xc0018460a0] [0xc001846070 0xc001846088 0xc0018460a0] [0xc001846080 0xc001846098] [0x935700 0x935700] 0xc002052f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:16:01.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:16:01.368: INFO: rc: 1 Mar 18 12:16:01.368: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001a8e5a0 exit status 1 true [0xc00000eed8 0xc00000efa0 0xc00000efe8] [0xc00000eed8 0xc00000efa0 0xc00000efe8] [0xc00000ef78 0xc00000efe0] [0x935700 0x935700] 0xc002646840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:16:11.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:16:11.506: INFO: rc: 1 Mar 18 12:16:11.507: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0002bb530 exit status 1 true [0xc0018460a8 0xc0018460c0 0xc0018460d8] [0xc0018460a8 0xc0018460c0 0xc0018460d8] [0xc0018460b8 0xc0018460d0] [0x935700 0x935700] 0xc0020531a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:16:21.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:16:21.593: INFO: rc: 1 Mar 18 12:16:21.594: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001a8e720 exit status 1 true [0xc00000f058 0xc00000f140 0xc00000f1d0] [0xc00000f058 0xc00000f140 0xc00000f1d0] [0xc00000f0d8 0xc00000f188] [0x935700 0x935700] 0xc002647260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:16:31.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:16:31.686: INFO: rc: 1 Mar 18 12:16:31.687: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001a8e870 exit status 1 true [0xc00000f1f8 0xc00000f240 0xc00000f2b8] [0xc00000f1f8 0xc00000f240 0xc00000f2b8] [0xc00000f220 0xc00000f288] [0x935700 0x935700] 0xc001446fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:16:41.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:16:41.777: INFO: rc: 1 Mar 18 12:16:41.777: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00238c150 exit status 1 true [0xc001fd0000 0xc001fd0018 0xc001fd0030] [0xc001fd0000 0xc001fd0018 0xc001fd0030] [0xc001fd0010 0xc001fd0028] [0x935700 0x935700] 0xc00149c300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 18 12:16:51.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ntq2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 12:16:51.865: INFO: rc: 1 Mar 18 12:16:51.865: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: Mar 18 12:16:51.865: INFO: Scaling statefulset ss to 0 Mar 18 12:16:51.873: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 18 12:16:51.875: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ntq2l Mar 18 12:16:51.877: INFO: Scaling statefulset ss to 0 Mar 18 12:16:51.885: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 12:16:51.887: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:16:51.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
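The burst-scaling spec above works by breaking each pod's readiness probe from the outside: the quoted `mv` commands move nginx's index.html out of the web root so the pods report Running but Ready=false, and the test then scales ss to 0 and asserts that deletion does not stall on the unhealthy pods. The long run of `rc: 1` retries is the framework attempting every 10s to restore index.html on ss-1 after that pod has already been deleted by the scale-down, which is why each attempt ends in "container not found" or pods "ss-1" not found until the retry window expires. A minimal by-hand sketch of the same sequence, reusing the namespace, StatefulSet, and pod names from the log (illustrative only, not the test's own code):

  # break readiness on one replica, exactly as the spec does via RunHostCmd
  kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-statefulset-ntq2l \
    exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

  # scale to 0 while the pods are Running but not Ready
  kubectl --namespace=e2e-tests-statefulset-ntq2l scale statefulset ss --replicas=0

  # watch the replicas drain; the point the spec asserts is that scale-down
  # does not halt on the unhealthy (not Ready) pods
  kubectl --namespace=e2e-tests-statefulset-ntq2l get pods -w

  # restoring readiness is the reverse move, as in the commands quoted above
  kubectl --namespace=e2e-tests-statefulset-ntq2l \
    exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'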
STEP: Destroying namespace "e2e-tests-statefulset-ntq2l" for this suite. Mar 18 12:16:57.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:16:57.971: INFO: namespace: e2e-tests-statefulset-ntq2l, resource: bindings, ignored listing per whitelist Mar 18 12:16:57.994: INFO: namespace e2e-tests-statefulset-ntq2l deletion completed in 6.091624141s • [SLOW TEST:361.145 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:16:57.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 12:16:58.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-t54rl' Mar 18 12:17:00.077: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 12:17:00.077: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Mar 18 12:17:02.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-t54rl' Mar 18 12:17:02.220: INFO: stderr: "" Mar 18 12:17:02.220: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:17:02.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t54rl" for this suite. 
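The deprecation warning captured above ("kubectl run --generator=deployment/apps.v1 is DEPRECATED ...") is why this spec is tied to older kubectl behaviour: `kubectl run` used to emit a Deployment. On a current kubectl the deployment generator is gone and `kubectl run` creates a bare pod, so a hedged modern equivalent of the same create/verify/delete cycle, reusing the image and namespace from the log, would be:

  # create the Deployment explicitly instead of via kubectl run's removed generator
  kubectl create deployment e2e-test-nginx-deployment \
    --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-t54rl

  # verify the controlled pod comes up (kubectl create deployment labels it app=<name>)
  kubectl get pods -l app=e2e-test-nginx-deployment -n e2e-tests-kubectl-t54rl

  # cleanup, mirroring the AfterEach above
  kubectl delete deployment e2e-test-nginx-deployment -n e2e-tests-kubectl-t54rl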
Mar 18 12:18:56.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:18:56.323: INFO: namespace: e2e-tests-kubectl-t54rl, resource: bindings, ignored listing per whitelist Mar 18 12:18:56.325: INFO: namespace e2e-tests-kubectl-t54rl deletion completed in 1m54.100891118s • [SLOW TEST:118.331 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:18:56.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Mar 18 12:19:00.466: INFO: Pod pod-hostip-a3a7b004-6912-11ea-9856-0242ac11000f has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:19:00.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-m9nvt" for this suite. 
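The host-IP spec boils down to a single field read: once the pod is scheduled, status.hostIP must carry the node's address (172.17.0.4 in the run above). The same check done by hand, with the pod name taken from the log (illustrative only; the namespace is deleted right after the test):

  kubectl -n e2e-tests-pods-m9nvt get pod pod-hostip-a3a7b004-6912-11ea-9856-0242ac11000f \
    -o jsonpath='{.status.hostIP}'
  # expected per the log above: 172.17.0.4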
Mar 18 12:19:22.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:19:22.522: INFO: namespace: e2e-tests-pods-m9nvt, resource: bindings, ignored listing per whitelist Mar 18 12:19:22.567: INFO: namespace e2e-tests-pods-m9nvt deletion completed in 22.099231214s • [SLOW TEST:26.242 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:19:22.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 18 12:19:27.192: INFO: Successfully updated pod "labelsupdateb3481768-6912-11ea-9856-0242ac11000f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:19:29.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dkf6v" for this suite. 
Mar 18 12:19:51.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:19:51.293: INFO: namespace: e2e-tests-projected-dkf6v, resource: bindings, ignored listing per whitelist Mar 18 12:19:51.340: INFO: namespace e2e-tests-projected-dkf6v deletion completed in 22.122171313s • [SLOW TEST:28.773 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:19:51.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 18 12:19:56.062: INFO: Successfully updated pod "labelsupdatec477649f-6912-11ea-9856-0242ac11000f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:19:58.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-82lqd" for this suite. 
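Both label-update specs above (projected downwardAPI and plain downwardAPI volume) exercise the same mechanism: the pod mounts its own metadata.labels as a file, the test patches the labels, and the kubelet rewrites the mounted file, which is what each "Successfully updated pod" line records. A by-hand version of the update half, using the downward-api pod name from the log; the label key and the mount path are hypothetical, since neither appears in the output:

  # patch a label on the running pod
  kubectl -n e2e-tests-downward-api-82lqd label pod \
    labelsupdatec477649f-6912-11ea-9856-0242ac11000f example-key=example-value --overwrite

  # then re-read the mounted labels file until the kubelet refreshes it
  # (assuming the volume is mounted at /etc/podinfo, which the log does not show)
  kubectl -n e2e-tests-downward-api-82lqd exec \
    labelsupdatec477649f-6912-11ea-9856-0242ac11000f -- cat /etc/podinfo/labels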
Mar 18 12:20:20.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:20:20.156: INFO: namespace: e2e-tests-downward-api-82lqd, resource: bindings, ignored listing per whitelist Mar 18 12:20:20.184: INFO: namespace e2e-tests-downward-api-82lqd deletion completed in 22.086999368s • [SLOW TEST:28.843 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:20:20.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:20:20.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-vbhvj" for this suite. Mar 18 12:20:26.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:20:26.365: INFO: namespace: e2e-tests-services-vbhvj, resource: bindings, ignored listing per whitelist Mar 18 12:20:26.407: INFO: namespace e2e-tests-services-vbhvj deletion completed in 6.118558017s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.222 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:20:26.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 18 12:20:34.566: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:34.588: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:36.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:36.593: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:38.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:38.592: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:40.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:40.608: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:42.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:42.592: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:44.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:44.592: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:46.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:46.592: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:48.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:48.592: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:50.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:50.770: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:52.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:52.592: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:54.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:54.614: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:56.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:56.592: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:20:58.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:20:58.597: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:21:00.589: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:21:00.603: INFO: Pod pod-with-poststart-exec-hook still exists Mar 18 12:21:02.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 18 12:21:02.593: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:21:02.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ljzpp" for this suite. 
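Editor's note: the long "Waiting for pod pod-with-poststart-exec-hook to disappear" loop above is the teardown of a pod whose container declares a postStart exec hook. A hedged sketch of the hook shape follows; the real e2e test also runs a separate HTTP "handle" pod that the hook reports to, which is omitted here, and the image choice is an assumption.

# Sketch only: a container with a postStart exec lifecycle hook.
# The hook runs right after the container starts, before the test's
# "check poststart hook" step can pass.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: hooked
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart ran > /tmp/poststart"]
EOF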
Mar 18 12:21:24.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:21:24.658: INFO: namespace: e2e-tests-container-lifecycle-hook-ljzpp, resource: bindings, ignored listing per whitelist Mar 18 12:21:24.699: INFO: namespace e2e-tests-container-lifecycle-hook-ljzpp deletion completed in 22.102000039s • [SLOW TEST:58.293 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:21:24.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-fc1587d0-6912-11ea-9856-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 18 12:21:24.835: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc1cd619-6912-11ea-9856-0242ac11000f" in namespace "e2e-tests-configmap-j8wrm" to be "success or failure" Mar 18 12:21:24.859: INFO: Pod "pod-configmaps-fc1cd619-6912-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.738294ms Mar 18 12:21:26.862: INFO: Pod "pod-configmaps-fc1cd619-6912-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027255371s Mar 18 12:21:28.866: INFO: Pod "pod-configmaps-fc1cd619-6912-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031214126s STEP: Saw pod success Mar 18 12:21:28.866: INFO: Pod "pod-configmaps-fc1cd619-6912-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:21:28.869: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-fc1cd619-6912-11ea-9856-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 18 12:21:28.886: INFO: Waiting for pod pod-configmaps-fc1cd619-6912-11ea-9856-0242ac11000f to disappear Mar 18 12:21:28.907: INFO: Pod pod-configmaps-fc1cd619-6912-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:21:28.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-j8wrm" for this suite. 
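Editor's note: "consumable from pods in volume with mappings" means the ConfigMap keys are remapped to explicit file paths via items, rather than being exposed under their key names. A rough, hedged equivalent (key names, values, and paths are made up):

# Sketch only: ConfigMap keys remapped to custom file names inside the volume.
kubectl create configmap configmap-test-volume-map --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1   # the "mapping": key data-1 becomes this file
EOF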
Mar 18 12:21:34.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:21:34.978: INFO: namespace: e2e-tests-configmap-j8wrm, resource: bindings, ignored listing per whitelist Mar 18 12:21:35.016: INFO: namespace e2e-tests-configmap-j8wrm deletion completed in 6.105788742s • [SLOW TEST:10.316 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:21:35.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 12:21:35.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0240a6ff-6913-11ea-9856-0242ac11000f" in namespace "e2e-tests-projected-x7d25" to be "success or failure" Mar 18 12:21:35.154: INFO: Pod "downwardapi-volume-0240a6ff-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.234395ms Mar 18 12:21:37.158: INFO: Pod "downwardapi-volume-0240a6ff-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021420156s Mar 18 12:21:39.162: INFO: Pod "downwardapi-volume-0240a6ff-6913-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02552517s STEP: Saw pod success Mar 18 12:21:39.162: INFO: Pod "downwardapi-volume-0240a6ff-6913-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:21:39.165: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0240a6ff-6913-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 12:21:39.185: INFO: Waiting for pod downwardapi-volume-0240a6ff-6913-11ea-9856-0242ac11000f to disappear Mar 18 12:21:39.231: INFO: Pod downwardapi-volume-0240a6ff-6913-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:21:39.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x7d25" for this suite. 
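Editor's note: this case projects limits.memory through a projected downward API volume; because the container sets no memory limit, the projected value falls back to the node's allocatable memory, which is what the assertion checks. A hedged sketch of that volume shape (names and image are assumptions):

# Sketch only: projected downward API exposing the container's memory limit.
# With no explicit limit on the container, node allocatable memory is reported.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF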
Mar 18 12:21:45.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:21:45.315: INFO: namespace: e2e-tests-projected-x7d25, resource: bindings, ignored listing per whitelist Mar 18 12:21:45.362: INFO: namespace e2e-tests-projected-x7d25 deletion completed in 6.128023913s • [SLOW TEST:10.346 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:21:45.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 18 12:21:45.487: INFO: Waiting up to 5m0s for pod "pod-086a795e-6913-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-8qc6l" to be "success or failure" Mar 18 12:21:45.512: INFO: Pod "pod-086a795e-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.477612ms Mar 18 12:21:47.516: INFO: Pod "pod-086a795e-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028930198s Mar 18 12:21:49.520: INFO: Pod "pod-086a795e-6913-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03269745s STEP: Saw pod success Mar 18 12:21:49.520: INFO: Pod "pod-086a795e-6913-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:21:49.523: INFO: Trying to get logs from node hunter-worker2 pod pod-086a795e-6913-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 12:21:49.540: INFO: Waiting for pod pod-086a795e-6913-11ea-9856-0242ac11000f to disappear Mar 18 12:21:49.544: INFO: Pod pod-086a795e-6913-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:21:49.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8qc6l" for this suite. 
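Editor's note: the emptyDir conformance names encode a (user, file mode, medium) tuple, so "(non-root,0644,default)" is a non-root container writing a 0644 file on the default, disk-backed medium. A loose shell sketch of the same shape; the UID, image, and paths are assumptions:

# Sketch only: emptyDir on the default medium, written by a non-root user.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root; the specific UID is an assumption
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /cache/file && chmod 0644 /cache/file && ls -l /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}             # default medium: backed by the node's disk
EOF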
Mar 18 12:21:55.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:21:55.607: INFO: namespace: e2e-tests-emptydir-8qc6l, resource: bindings, ignored listing per whitelist Mar 18 12:21:55.676: INFO: namespace e2e-tests-emptydir-8qc6l deletion completed in 6.129471078s • [SLOW TEST:10.314 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:21:55.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 12:21:55.744: INFO: Creating deployment "test-recreate-deployment" Mar 18 12:21:55.748: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 18 12:21:55.800: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Mar 18 12:21:57.807: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 18 12:21:57.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720130915, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720130915, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720130915, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720130915, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 12:21:59.813: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 18 12:21:59.821: INFO: Updating deployment test-recreate-deployment Mar 18 12:21:59.821: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 18 12:22:00.023: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-6jjvq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6jjvq/deployments/test-recreate-deployment,UID:0e8a3a61-6913-11ea-99e8-0242ac110002,ResourceVersion:502974,Generation:2,CreationTimestamp:2020-03-18 12:21:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-18 12:21:59 +0000 UTC 2020-03-18 12:21:59 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-18 12:22:00 +0000 UTC 2020-03-18 12:21:55 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 18 12:22:00.043: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-6jjvq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6jjvq/replicasets/test-recreate-deployment-589c4bfd,UID:1104e44d-6913-11ea-99e8-0242ac110002,ResourceVersion:502971,Generation:1,CreationTimestamp:2020-03-18 12:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0e8a3a61-6913-11ea-99e8-0242ac110002 0xc00242a94f 0xc00242a960}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 12:22:00.043: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 18 12:22:00.043: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-6jjvq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6jjvq/replicasets/test-recreate-deployment-5bf7f65dc,UID:0e92a068-6913-11ea-99e8-0242ac110002,ResourceVersion:502962,Generation:2,CreationTimestamp:2020-03-18 12:21:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0e8a3a61-6913-11ea-99e8-0242ac110002 0xc00242aa90 0xc00242aa91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 12:22:00.047: INFO: Pod "test-recreate-deployment-589c4bfd-4nsck" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-4nsck,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-6jjvq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6jjvq/pods/test-recreate-deployment-589c4bfd-4nsck,UID:1105bdb2-6913-11ea-99e8-0242ac110002,ResourceVersion:502975,Generation:0,CreationTimestamp:2020-03-18 12:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 1104e44d-6913-11ea-99e8-0242ac110002 0xc001c2612f 0xc001c26140}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-km7r8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-km7r8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-km7r8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c261e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c26200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:21:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:21:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-18 12:21:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:22:00.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6jjvq" for this suite. 
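Editor's note: the object dumps above show a Recreate-strategy Deployment mid-rollout — the old redis ReplicaSet is scaled to zero before the new nginx ReplicaSet's pod becomes ready, which is exactly what "delete old pods and create new ones" asserts. A hedged sketch of such a Deployment; the metadata is illustrative, the images match those in the log, and the rollout trigger below is a plain image update rather than the test's programmatic template change.

# Sketch only: a Recreate-strategy Deployment. Old pods are torn down
# before replacement pods are created, so old and new never run together.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# Triggering a new rollout amounts to changing the pod template, e.g.:
kubectl set image deployment/test-recreate-deployment redis=docker.io/library/nginx:1.14-alpine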
Mar 18 12:22:06.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:22:06.202: INFO: namespace: e2e-tests-deployment-6jjvq, resource: bindings, ignored listing per whitelist Mar 18 12:22:06.243: INFO: namespace e2e-tests-deployment-6jjvq deletion completed in 6.192422988s • [SLOW TEST:10.566 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:22:06.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 18 12:22:06.368: INFO: Waiting up to 5m0s for pod "pod-14d92be9-6913-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-bxrl4" to be "success or failure" Mar 18 12:22:06.384: INFO: Pod "pod-14d92be9-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.723485ms Mar 18 12:22:08.388: INFO: Pod "pod-14d92be9-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01944148s Mar 18 12:22:10.391: INFO: Pod "pod-14d92be9-6913-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023267992s STEP: Saw pod success Mar 18 12:22:10.391: INFO: Pod "pod-14d92be9-6913-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:22:10.395: INFO: Trying to get logs from node hunter-worker pod pod-14d92be9-6913-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 12:22:10.415: INFO: Waiting for pod pod-14d92be9-6913-11ea-9856-0242ac11000f to disappear Mar 18 12:22:10.420: INFO: Pod pod-14d92be9-6913-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:22:10.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bxrl4" for this suite. 
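Editor's note: the "(root,0644,tmpfs)" variant differs from the default-medium cases only in the volume definition — medium: Memory backs the emptyDir with RAM (tmpfs) instead of node disk. A short hedged sketch that just verifies the backing filesystem; names and image are assumptions:

# Sketch only: the tmpfs ("Memory" medium) flavour of an emptyDir volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /cache"]   # should report tmpfs
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory       # RAM-backed (tmpfs) instead of node disk
EOF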
Mar 18 12:22:16.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:22:16.444: INFO: namespace: e2e-tests-emptydir-bxrl4, resource: bindings, ignored listing per whitelist Mar 18 12:22:16.513: INFO: namespace e2e-tests-emptydir-bxrl4 deletion completed in 6.089106294s • [SLOW TEST:10.270 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:22:16.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-k5m6 STEP: Creating a pod to test atomic-volume-subpath Mar 18 12:22:16.628: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-k5m6" in namespace "e2e-tests-subpath-x22ng" to be "success or failure" Mar 18 12:22:16.634: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082513ms Mar 18 12:22:18.638: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010043446s Mar 18 12:22:20.642: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013535278s Mar 18 12:22:22.646: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. Elapsed: 6.01788141s Mar 18 12:22:24.650: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. Elapsed: 8.022177822s Mar 18 12:22:26.655: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. Elapsed: 10.026460244s Mar 18 12:22:28.658: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. Elapsed: 12.030268385s Mar 18 12:22:30.663: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. Elapsed: 14.034773217s Mar 18 12:22:32.667: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. Elapsed: 16.038795026s Mar 18 12:22:34.672: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. Elapsed: 18.043446362s Mar 18 12:22:36.676: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. Elapsed: 20.047712272s Mar 18 12:22:38.680: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. Elapsed: 22.05214204s Mar 18 12:22:40.684: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.056194318s Mar 18 12:22:42.688: INFO: Pod "pod-subpath-test-projected-k5m6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.060323081s STEP: Saw pod success Mar 18 12:22:42.689: INFO: Pod "pod-subpath-test-projected-k5m6" satisfied condition "success or failure" Mar 18 12:22:42.692: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-k5m6 container test-container-subpath-projected-k5m6: STEP: delete the pod Mar 18 12:22:42.709: INFO: Waiting for pod pod-subpath-test-projected-k5m6 to disappear Mar 18 12:22:42.727: INFO: Pod pod-subpath-test-projected-k5m6 no longer exists STEP: Deleting pod pod-subpath-test-projected-k5m6 Mar 18 12:22:42.727: INFO: Deleting pod "pod-subpath-test-projected-k5m6" in namespace "e2e-tests-subpath-x22ng" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:22:42.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-x22ng" for this suite. Mar 18 12:22:48.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:22:48.834: INFO: namespace: e2e-tests-subpath-x22ng, resource: bindings, ignored listing per whitelist Mar 18 12:22:48.839: INFO: namespace e2e-tests-subpath-x22ng deletion completed in 6.106561972s • [SLOW TEST:32.325 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:22:48.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 18 12:22:48.970: INFO: Waiting up to 5m0s for pod "pod-2e42bd31-6913-11ea-9856-0242ac11000f" in namespace "e2e-tests-emptydir-dht4h" to be "success or failure" Mar 18 12:22:48.977: INFO: Pod "pod-2e42bd31-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.26268ms Mar 18 12:22:50.981: INFO: Pod "pod-2e42bd31-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011274315s Mar 18 12:22:52.986: INFO: Pod "pod-2e42bd31-6913-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015572322s STEP: Saw pod success Mar 18 12:22:52.986: INFO: Pod "pod-2e42bd31-6913-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:22:52.989: INFO: Trying to get logs from node hunter-worker pod pod-2e42bd31-6913-11ea-9856-0242ac11000f container test-container: STEP: delete the pod Mar 18 12:22:53.019: INFO: Waiting for pod pod-2e42bd31-6913-11ea-9856-0242ac11000f to disappear Mar 18 12:22:53.023: INFO: Pod pod-2e42bd31-6913-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:22:53.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dht4h" for this suite. Mar 18 12:22:59.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:22:59.064: INFO: namespace: e2e-tests-emptydir-dht4h, resource: bindings, ignored listing per whitelist Mar 18 12:22:59.119: INFO: namespace e2e-tests-emptydir-dht4h deletion completed in 6.092574267s • [SLOW TEST:10.280 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:22:59.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 18 12:22:59.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:22:59.467: INFO: stderr: "" Mar 18 12:22:59.467: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 12:22:59.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:22:59.589: INFO: stderr: "" Mar 18 12:22:59.589: INFO: stdout: "update-demo-nautilus-856jn update-demo-nautilus-ddj8r " Mar 18 12:22:59.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-856jn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:22:59.689: INFO: stderr: "" Mar 18 12:22:59.689: INFO: stdout: "" Mar 18 12:22:59.689: INFO: update-demo-nautilus-856jn is created but not running Mar 18 12:23:04.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:23:04.790: INFO: stderr: "" Mar 18 12:23:04.790: INFO: stdout: "update-demo-nautilus-856jn update-demo-nautilus-ddj8r " Mar 18 12:23:04.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-856jn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:23:04.879: INFO: stderr: "" Mar 18 12:23:04.879: INFO: stdout: "true" Mar 18 12:23:04.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-856jn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:23:04.977: INFO: stderr: "" Mar 18 12:23:04.977: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 12:23:04.977: INFO: validating pod update-demo-nautilus-856jn Mar 18 12:23:04.981: INFO: got data: { "image": "nautilus.jpg" } Mar 18 12:23:04.981: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 12:23:04.981: INFO: update-demo-nautilus-856jn is verified up and running Mar 18 12:23:04.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddj8r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:23:05.086: INFO: stderr: "" Mar 18 12:23:05.086: INFO: stdout: "true" Mar 18 12:23:05.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddj8r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:23:05.183: INFO: stderr: "" Mar 18 12:23:05.183: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 12:23:05.183: INFO: validating pod update-demo-nautilus-ddj8r Mar 18 12:23:05.188: INFO: got data: { "image": "nautilus.jpg" } Mar 18 12:23:05.188: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 12:23:05.188: INFO: update-demo-nautilus-ddj8r is verified up and running STEP: using delete to clean up resources Mar 18 12:23:05.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:23:05.291: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 18 12:23:05.291: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 18 12:23:05.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-9lzpr' Mar 18 12:23:05.394: INFO: stderr: "No resources found.\n" Mar 18 12:23:05.394: INFO: stdout: "" Mar 18 12:23:05.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-9lzpr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 12:23:05.500: INFO: stderr: "" Mar 18 12:23:05.500: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:23:05.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9lzpr" for this suite. Mar 18 12:23:27.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:23:27.575: INFO: namespace: e2e-tests-kubectl-9lzpr, resource: bindings, ignored listing per whitelist Mar 18 12:23:27.621: INFO: namespace e2e-tests-kubectl-9lzpr deletion completed in 22.116777509s • [SLOW TEST:28.501 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:23:27.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xbwxk.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbwxk.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xbwxk.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xbwxk.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbwxk.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xbwxk.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 12:23:33.849: INFO: DNS probes using e2e-tests-dns-xbwxk/dns-test-455ed821-6913-11ea-9856-0242ac11000f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:23:33.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-xbwxk" for this suite. 
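Editor's note: the wheezy/jessie probe scripts above boil down to resolving kubernetes.default (and its fully qualified forms) over UDP and TCP from inside a pod. A condensed, hedged equivalent using a throwaway pod; the pod name and image are assumptions (busybox:1.28 is chosen only because its nslookup behaves sanely), not what the e2e framework runs:

# Sketch only: a one-off in-cluster lookup of the API server's service name.
kubectl run dns-probe --restart=Never --image=busybox:1.28 -- \
  nslookup kubernetes.default.svc.cluster.local
# Once the pod completes, the log should show the ClusterIP of the
# "kubernetes" service; then clean up.
kubectl logs dns-probe
kubectl delete pod dns-probe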
Mar 18 12:23:39.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:23:40.061: INFO: namespace: e2e-tests-dns-xbwxk, resource: bindings, ignored listing per whitelist Mar 18 12:23:40.070: INFO: namespace e2e-tests-dns-xbwxk deletion completed in 6.182275019s • [SLOW TEST:12.449 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:23:40.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 18 12:23:40.251: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 18 12:23:40.262: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:40.264: INFO: Number of nodes with available pods: 0 Mar 18 12:23:40.264: INFO: Node hunter-worker is running more than one daemon pod Mar 18 12:23:41.268: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:41.270: INFO: Number of nodes with available pods: 0 Mar 18 12:23:41.270: INFO: Node hunter-worker is running more than one daemon pod Mar 18 12:23:42.299: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:42.301: INFO: Number of nodes with available pods: 0 Mar 18 12:23:42.301: INFO: Node hunter-worker is running more than one daemon pod Mar 18 12:23:43.269: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:43.272: INFO: Number of nodes with available pods: 0 Mar 18 12:23:43.272: INFO: Node hunter-worker is running more than one daemon pod Mar 18 12:23:44.268: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:44.271: INFO: Number of nodes with available pods: 2 Mar 18 12:23:44.271: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
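Editor's note: at this point the DaemonSet (created with the nginx image) has its pod image switched, and because the update strategy is RollingUpdate the controller replaces pods node by node; the "Wrong image for pod" lines that follow are the poll loop watching that replacement converge. A hedged sketch of how the update step would look done by hand; the container name "app" is an assumption about the test's pod template, and the DaemonSet name and target image come from the log:

# Sketch only: trigger a RollingUpdate of an existing DaemonSet's image,
# then watch the per-node replacement the log below polls for.
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set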
Mar 18 12:23:44.297: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:44.297: INFO: Wrong image for pod: daemon-set-vftf8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:44.313: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:45.317: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:45.317: INFO: Wrong image for pod: daemon-set-vftf8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:45.320: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:46.347: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:46.348: INFO: Wrong image for pod: daemon-set-vftf8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:46.352: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:47.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:47.318: INFO: Wrong image for pod: daemon-set-vftf8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:47.318: INFO: Pod daemon-set-vftf8 is not available Mar 18 12:23:47.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:48.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:48.318: INFO: Pod daemon-set-w4snb is not available Mar 18 12:23:48.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:49.399: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:49.399: INFO: Pod daemon-set-w4snb is not available Mar 18 12:23:49.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:50.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:50.318: INFO: Pod daemon-set-w4snb is not available Mar 18 12:23:50.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:51.317: INFO: Wrong image for pod: daemon-set-h29qk. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:51.325: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:52.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:52.318: INFO: Pod daemon-set-h29qk is not available Mar 18 12:23:52.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:53.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:53.318: INFO: Pod daemon-set-h29qk is not available Mar 18 12:23:53.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:54.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:54.318: INFO: Pod daemon-set-h29qk is not available Mar 18 12:23:54.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:55.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:55.318: INFO: Pod daemon-set-h29qk is not available Mar 18 12:23:55.323: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:56.332: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:56.332: INFO: Pod daemon-set-h29qk is not available Mar 18 12:23:56.336: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:57.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:57.318: INFO: Pod daemon-set-h29qk is not available Mar 18 12:23:57.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:58.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:23:58.318: INFO: Pod daemon-set-h29qk is not available Mar 18 12:23:58.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:23:59.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 18 12:23:59.318: INFO: Pod daemon-set-h29qk is not available Mar 18 12:23:59.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:24:00.318: INFO: Wrong image for pod: daemon-set-h29qk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 12:24:00.318: INFO: Pod daemon-set-h29qk is not available Mar 18 12:24:00.322: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:24:01.377: INFO: Pod daemon-set-xjsbm is not available Mar 18 12:24:01.386: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 18 12:24:01.406: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:24:01.423: INFO: Number of nodes with available pods: 1 Mar 18 12:24:01.423: INFO: Node hunter-worker is running more than one daemon pod Mar 18 12:24:02.429: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:24:02.432: INFO: Number of nodes with available pods: 1 Mar 18 12:24:02.432: INFO: Node hunter-worker is running more than one daemon pod Mar 18 12:24:03.427: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:24:03.429: INFO: Number of nodes with available pods: 1 Mar 18 12:24:03.430: INFO: Node hunter-worker is running more than one daemon pod Mar 18 12:24:04.428: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 12:24:04.435: INFO: Number of nodes with available pods: 2 Mar 18 12:24:04.435: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-b26rq, will wait for the garbage collector to delete the pods Mar 18 12:24:04.528: INFO: Deleting DaemonSet.extensions daemon-set took: 6.259826ms Mar 18 12:24:04.629: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.2893ms Mar 18 12:24:11.832: INFO: Number of nodes with available pods: 0 Mar 18 12:24:11.832: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 12:24:11.834: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-b26rq/daemonsets","resourceVersion":"503505"},"items":null} Mar 18 12:24:11.836: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-b26rq/pods","resourceVersion":"503505"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:24:11.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-b26rq" for this suite. Mar 18 12:24:17.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:24:17.899: INFO: namespace: e2e-tests-daemonsets-b26rq, resource: bindings, ignored listing per whitelist Mar 18 12:24:17.946: INFO: namespace e2e-tests-daemonsets-b26rq deletion completed in 6.097703186s • [SLOW TEST:37.876 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:24:17.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-dr9zj STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-dr9zj STEP: Deleting pre-stop pod Mar 18 12:24:31.106: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:24:31.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-dr9zj" for this suite. 
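The PreStop spec above deletes the tester pod and then confirms that the server pod recorded a "prestop" call, i.e. it exercises the container lifecycle preStop hook, which the kubelet runs before stopping the container. Below is a hedged Go sketch of a pod carrying such a hook; the hook command, image and pod name are invented for illustration, and the hook's Go type is corev1.Handler in client libraries contemporary with this run (later releases renamed it LifecycleHandler).

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "tester"}, // illustrative name
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "tester",
                    Image: "docker.io/library/busybox:1.29", // illustrative image
                    Lifecycle: &corev1.Lifecycle{
                        // Runs inside the container before it is stopped; an
                        // equivalent hook is what lets the server pod observe
                        // the tester's deletion in the spec above.
                        PreStop: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c", "wget -q -O- http://server:8080/prestop"}, // hypothetical command
                            },
                        },
                    },
                }},
            },
        }

        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
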
Mar 18 12:25:09.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:25:09.210: INFO: namespace: e2e-tests-prestop-dr9zj, resource: bindings, ignored listing per whitelist Mar 18 12:25:09.210: INFO: namespace e2e-tests-prestop-dr9zj deletion completed in 38.093356739s • [SLOW TEST:51.264 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:25:09.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 18 12:25:13.879: INFO: Successfully updated pod "pod-update-81ef1c32-6913-11ea-9856-0242ac11000f" STEP: verifying the updated pod is in kubernetes Mar 18 12:25:13.886: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:25:13.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-w4dx4" for this suite. 
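The "Pods should be updated" spec above mutates a live pod in place; the conformance test does this by re-submitting the pod with a changed label. The stdlib-only Go sketch below just builds the equivalent metadata-only patch body; the label key and value are hypothetical and no API call is made.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // A metadata-only change: only labels are touched, the pod spec is left alone.
        patch := map[string]interface{}{
            "metadata": map[string]interface{}{
                "labels": map[string]string{
                    "time": "updated", // hypothetical new label value
                },
            },
        }
        body, err := json.Marshal(patch)
        if err != nil {
            panic(err)
        }
        // The body would be sent as a merge or strategic-merge patch to
        // /api/v1/namespaces/<ns>/pods/<name>, for example via kubectl patch
        // or a client library; this sketch only prints it.
        fmt.Println(string(body))
    }
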
Mar 18 12:25:35.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:25:35.914: INFO: namespace: e2e-tests-pods-w4dx4, resource: bindings, ignored listing per whitelist Mar 18 12:25:36.025: INFO: namespace e2e-tests-pods-w4dx4 deletion completed in 22.135947359s • [SLOW TEST:26.815 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:25:36.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-91e6d2fa-6913-11ea-9856-0242ac11000f STEP: Creating secret with name s-test-opt-upd-91e6d370-6913-11ea-9856-0242ac11000f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-91e6d2fa-6913-11ea-9856-0242ac11000f STEP: Updating secret s-test-opt-upd-91e6d370-6913-11ea-9856-0242ac11000f STEP: Creating secret with name s-test-opt-create-91e6d399-6913-11ea-9856-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:26:46.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rd2sz" for this suite. 
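The projected-secret spec above relies on a projected volume whose secret sources are marked optional, which is what allows one secret to be deleted and another created while the pod keeps running and the mounted files converge. A minimal Go sketch of such a pod follows; the secret names echo the log's prefixes, but the pod name, image and command are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        optional := true

        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"}, // illustrative name
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "projected-secrets",
                    VolumeSource: corev1.VolumeSource{
                        // A projected volume merges several sources into one mount;
                        // Optional=true is what lets a source secret be deleted or
                        // created while the pod keeps running.
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{
                                {Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
                                    Optional:             &optional,
                                }},
                                {Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
                                    Optional:             &optional,
                                }},
                            },
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "watcher",
                    Image:        "docker.io/library/busybox:1.29", // illustrative
                    Command:      []string{"sh", "-c", "sleep 3600"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "projected-secrets", MountPath: "/etc/projected"}},
                }},
            },
        }

        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
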
Mar 18 12:27:08.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:27:08.594: INFO: namespace: e2e-tests-projected-rd2sz, resource: bindings, ignored listing per whitelist Mar 18 12:27:08.648: INFO: namespace e2e-tests-projected-rd2sz deletion completed in 22.091257688s • [SLOW TEST:92.622 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:27:08.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Mar 18 12:27:08.782: INFO: Waiting up to 5m0s for pod "var-expansion-c91eda6f-6913-11ea-9856-0242ac11000f" in namespace "e2e-tests-var-expansion-5jvnc" to be "success or failure" Mar 18 12:27:08.807: INFO: Pod "var-expansion-c91eda6f-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.821008ms Mar 18 12:27:10.811: INFO: Pod "var-expansion-c91eda6f-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029790204s Mar 18 12:27:12.815: INFO: Pod "var-expansion-c91eda6f-6913-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033319774s STEP: Saw pod success Mar 18 12:27:12.815: INFO: Pod "var-expansion-c91eda6f-6913-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:27:12.817: INFO: Trying to get logs from node hunter-worker pod var-expansion-c91eda6f-6913-11ea-9856-0242ac11000f container dapi-container: STEP: delete the pod Mar 18 12:27:12.838: INFO: Waiting for pod var-expansion-c91eda6f-6913-11ea-9856-0242ac11000f to disappear Mar 18 12:27:12.843: INFO: Pod var-expansion-c91eda6f-6913-11ea-9856-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:27:12.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-5jvnc" for this suite. 
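The variable-expansion spec above checks that $(VAR) references in a container's args are expanded from the container's environment before the command runs. A small illustrative Go sketch of such a pod is below; the variable name, value and image are made up.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "docker.io/library/busybox:1.29", // illustrative
                    Command: []string{"sh", "-c"},
                    // "$(MY_VAR)" is substituted from the container's env by the
                    // kubelet before the shell ever sees the string; that
                    // substitution is what the spec above asserts on.
                    Args: []string{"echo test-value: $(MY_VAR)"},
                    Env: []corev1.EnvVar{{
                        Name:  "MY_VAR",
                        Value: "test-value",
                    }},
                }},
            },
        }

        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
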
Mar 18 12:27:18.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:27:18.909: INFO: namespace: e2e-tests-var-expansion-5jvnc, resource: bindings, ignored listing per whitelist Mar 18 12:27:18.948: INFO: namespace e2e-tests-var-expansion-5jvnc deletion completed in 6.083523485s • [SLOW TEST:10.300 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:27:18.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Mar 18 12:27:19.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 18 12:27:19.196: INFO: stderr: "" Mar 18 12:27:19.196: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:27:19.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gd88m" for this suite. 
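The kubectl api-versions check above amounts to listing the server's API groups and confirming that the legacy core group ("v1") is present. The Go sketch below performs the same query through client-go's discovery client; the kubeconfig path is an assumption carried over from the log.

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            panic(err)
        }
        groups, err := dc.ServerGroups()
        if err != nil {
            panic(err)
        }
        for _, g := range groups.Groups {
            for _, v := range g.Versions {
                // The core group has an empty name, so its entry prints as "v1",
                // which is the string the conformance check looks for.
                fmt.Println(v.GroupVersion)
            }
        }
    }
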
Mar 18 12:27:25.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:27:25.262: INFO: namespace: e2e-tests-kubectl-gd88m, resource: bindings, ignored listing per whitelist Mar 18 12:27:25.321: INFO: namespace e2e-tests-kubectl-gd88m deletion completed in 6.120344536s • [SLOW TEST:6.373 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:27:25.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 18 12:27:29.450: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d3096ba9-6913-11ea-9856-0242ac11000f,GenerateName:,Namespace:e2e-tests-events-4hw8t,SelfLink:/api/v1/namespaces/e2e-tests-events-4hw8t/pods/send-events-d3096ba9-6913-11ea-9856-0242ac11000f,UID:d30acae8-6913-11ea-99e8-0242ac110002,ResourceVersion:504068,Generation:0,CreationTimestamp:2020-03-18 12:27:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 412660292,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vqrvn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vqrvn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-vqrvn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00299b410} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00299b430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:27:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:27:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:27:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:27:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.72,StartTime:2020-03-18 12:27:25 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-18 12:27:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://b67103bd27bc9903034d1a3d199176cb17410e3aef92e2a8c99c0ff97b38b0b1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 18 12:27:31.455: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 18 12:27:33.461: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:27:33.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-4hw8t" for this suite. Mar 18 12:28:13.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:28:13.508: INFO: namespace: e2e-tests-events-4hw8t, resource: bindings, ignored listing per whitelist Mar 18 12:28:13.558: INFO: namespace e2e-tests-events-4hw8t deletion completed in 40.088023338s • [SLOW TEST:48.237 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:28:13.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 18 12:28:13.668: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:28:21.223: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-cns6c" for this suite. Mar 18 12:28:27.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:28:27.377: INFO: namespace: e2e-tests-init-container-cns6c, resource: bindings, ignored listing per whitelist Mar 18 12:28:27.391: INFO: namespace e2e-tests-init-container-cns6c deletion completed in 6.088486789s • [SLOW TEST:13.833 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:28:27.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 18 12:28:27.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f806f150-6913-11ea-9856-0242ac11000f" in namespace "e2e-tests-downward-api-plnb7" to be "success or failure" Mar 18 12:28:27.491: INFO: Pod "downwardapi-volume-f806f150-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.83803ms Mar 18 12:28:29.496: INFO: Pod "downwardapi-volume-f806f150-6913-11ea-9856-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008937984s Mar 18 12:28:31.500: INFO: Pod "downwardapi-volume-f806f150-6913-11ea-9856-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012996764s STEP: Saw pod success Mar 18 12:28:31.500: INFO: Pod "downwardapi-volume-f806f150-6913-11ea-9856-0242ac11000f" satisfied condition "success or failure" Mar 18 12:28:31.503: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f806f150-6913-11ea-9856-0242ac11000f container client-container: STEP: delete the pod Mar 18 12:28:31.523: INFO: Waiting for pod downwardapi-volume-f806f150-6913-11ea-9856-0242ac11000f to disappear Mar 18 12:28:31.557: INFO: Pod downwardapi-volume-f806f150-6913-11ea-9856-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:28:31.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-plnb7" for this suite. 
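The downward-API spec above projects the container's own CPU request into a file via a resourceFieldRef volume item and reads it back. A hedged Go sketch of an equivalent pod follows; the request value, mount path, image and names are illustrative.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "docker.io/library/busybox:1.29", // illustrative
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse("250m"), // the value surfaced in the file
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        // resourceFieldRef writes the container's CPU request into
                        // a file, which is what the spec above reads back.
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_request",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.cpu",
                                },
                            }},
                        },
                    },
                }},
            },
        }

        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
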
Mar 18 12:28:37.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:28:37.625: INFO: namespace: e2e-tests-downward-api-plnb7, resource: bindings, ignored listing per whitelist Mar 18 12:28:37.659: INFO: namespace e2e-tests-downward-api-plnb7 deletion completed in 6.097268793s • [SLOW TEST:10.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:28:37.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-qzrg STEP: Creating a pod to test atomic-volume-subpath Mar 18 12:28:37.863: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qzrg" in namespace "e2e-tests-subpath-qsvhl" to be "success or failure" Mar 18 12:28:37.866: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.415185ms Mar 18 12:28:39.871: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008622794s Mar 18 12:28:41.907: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044380119s Mar 18 12:28:43.911: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. Elapsed: 6.047964404s Mar 18 12:28:45.915: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. Elapsed: 8.052162092s Mar 18 12:28:47.919: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. Elapsed: 10.056242018s Mar 18 12:28:49.922: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. Elapsed: 12.05966239s Mar 18 12:28:51.926: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. Elapsed: 14.06361228s Mar 18 12:28:53.930: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. Elapsed: 16.067452993s Mar 18 12:28:55.943: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. Elapsed: 18.080348288s Mar 18 12:28:57.947: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. Elapsed: 20.084264453s Mar 18 12:28:59.951: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.088206632s Mar 18 12:29:01.961: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Running", Reason="", readiness=false. Elapsed: 24.09863099s Mar 18 12:29:03.967: INFO: Pod "pod-subpath-test-configmap-qzrg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.104342305s STEP: Saw pod success Mar 18 12:29:03.967: INFO: Pod "pod-subpath-test-configmap-qzrg" satisfied condition "success or failure" Mar 18 12:29:03.970: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-qzrg container test-container-subpath-configmap-qzrg: STEP: delete the pod Mar 18 12:29:04.003: INFO: Waiting for pod pod-subpath-test-configmap-qzrg to disappear Mar 18 12:29:04.013: INFO: Pod pod-subpath-test-configmap-qzrg no longer exists STEP: Deleting pod pod-subpath-test-configmap-qzrg Mar 18 12:29:04.013: INFO: Deleting pod "pod-subpath-test-configmap-qzrg" in namespace "e2e-tests-subpath-qsvhl" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:29:04.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-qsvhl" for this suite. Mar 18 12:29:10.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:29:10.088: INFO: namespace: e2e-tests-subpath-qsvhl, resource: bindings, ignored listing per whitelist Mar 18 12:29:10.112: INFO: namespace e2e-tests-subpath-qsvhl deletion completed in 6.093957917s • [SLOW TEST:32.453 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:29:10.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-11806334-6914-11ea-9856-0242ac11000f STEP: Creating the pod STEP: Updating configmap configmap-test-upd-11806334-6914-11ea-9856-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 18 12:29:18.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9zg9b" for this suite. 
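The ConfigMap update spec above depends on the kubelet refreshing a whole-volume ConfigMap mount after the ConfigMap object changes; a single key mounted via subPath (as in the Subpath spec earlier) would keep its original content instead. The Go sketch below builds an illustrative ConfigMap plus a pod that repeatedly reads one of its keys; names, image and command are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        cm := corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"}, // illustrative name
            Data:       map[string]string{"data-1": "value-1"},
        }

        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "docker.io/library/busybox:1.29", // illustrative
                    Command: []string{"sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"},
                    // A whole-volume mount: the kubelet rewrites the files when the
                    // ConfigMap is updated, which is the behaviour asserted above.
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }

        for _, obj := range []interface{}{cm, pod} {
            out, _ := json.MarshalIndent(obj, "", "  ")
            fmt.Println(string(out))
        }
    }
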
Mar 18 12:29:40.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:29:40.354: INFO: namespace: e2e-tests-configmap-9zg9b, resource: bindings, ignored listing per whitelist Mar 18 12:29:40.388: INFO: namespace e2e-tests-configmap-9zg9b deletion completed in 22.116342827s • [SLOW TEST:30.275 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 18 12:29:40.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 18 12:29:50.546: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 12:29:50.546: INFO: >>> kubeConfig: /root/.kube/config I0318 12:29:50.579596 6 log.go:172] (0xc002268420) (0xc002a500a0) Create stream I0318 12:29:50.579620 6 log.go:172] (0xc002268420) (0xc002a500a0) Stream added, broadcasting: 1 I0318 12:29:50.581994 6 log.go:172] (0xc002268420) Reply frame received for 1 I0318 12:29:50.582035 6 log.go:172] (0xc002268420) (0xc00262fa40) Create stream I0318 12:29:50.582051 6 log.go:172] (0xc002268420) (0xc00262fa40) Stream added, broadcasting: 3 I0318 12:29:50.583341 6 log.go:172] (0xc002268420) Reply frame received for 3 I0318 12:29:50.583394 6 log.go:172] (0xc002268420) (0xc00279b540) Create stream I0318 12:29:50.583410 6 log.go:172] (0xc002268420) (0xc00279b540) Stream added, broadcasting: 5 I0318 12:29:50.584480 6 log.go:172] (0xc002268420) Reply frame received for 5 I0318 12:29:50.670620 6 log.go:172] (0xc002268420) Data frame received for 5 I0318 12:29:50.670653 6 log.go:172] (0xc00279b540) (5) Data frame handling I0318 12:29:50.670688 6 log.go:172] (0xc002268420) Data frame received for 3 I0318 12:29:50.670723 6 log.go:172] (0xc00262fa40) (3) Data frame handling I0318 12:29:50.670752 6 log.go:172] (0xc00262fa40) (3) Data frame sent I0318 12:29:50.670776 6 log.go:172] (0xc002268420) Data frame received for 3 I0318 12:29:50.670789 6 log.go:172] (0xc00262fa40) (3) Data frame handling I0318 12:29:50.672483 6 log.go:172] (0xc002268420) Data frame received for 1 I0318 12:29:50.672516 6 log.go:172] (0xc002a500a0) (1) Data frame handling I0318 12:29:50.672530 6 log.go:172] (0xc002a500a0) (1) Data frame sent I0318 12:29:50.672551 6 
log.go:172] (0xc002268420) (0xc002a500a0) Stream removed, broadcasting: 1 I0318 12:29:50.672572 6 log.go:172] (0xc002268420) Go away received I0318 12:29:50.672651 6 log.go:172] (0xc002268420) (0xc002a500a0) Stream removed, broadcasting: 1 I0318 12:29:50.672668 6 log.go:172] (0xc002268420) (0xc00262fa40) Stream removed, broadcasting: 3 I0318 12:29:50.672678 6 log.go:172] (0xc002268420) (0xc00279b540) Stream removed, broadcasting: 5 Mar 18 12:29:50.672: INFO: Exec stderr: "" Mar 18 12:29:50.672: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 12:29:50.672: INFO: >>> kubeConfig: /root/.kube/config I0318 12:29:50.701903 6 log.go:172] (0xc00079f3f0) (0xc00279b7c0) Create stream I0318 12:29:50.701928 6 log.go:172] (0xc00079f3f0) (0xc00279b7c0) Stream added, broadcasting: 1 I0318 12:29:50.705361 6 log.go:172] (0xc00079f3f0) Reply frame received for 1 I0318 12:29:50.705426 6 log.go:172] (0xc00079f3f0) (0xc00279b860) Create stream I0318 12:29:50.705443 6 log.go:172] (0xc00079f3f0) (0xc00279b860) Stream added, broadcasting: 3 I0318 12:29:50.706567 6 log.go:172] (0xc00079f3f0) Reply frame received for 3 I0318 12:29:50.706601 6 log.go:172] (0xc00079f3f0) (0xc00279b900) Create stream I0318 12:29:50.706610 6 log.go:172] (0xc00079f3f0) (0xc00279b900) Stream added, broadcasting: 5 I0318 12:29:50.707537 6 log.go:172] (0xc00079f3f0) Reply frame received for 5 I0318 12:29:50.764675 6 log.go:172] (0xc00079f3f0) Data frame received for 5 I0318 12:29:50.764713 6 log.go:172] (0xc00279b900) (5) Data frame handling I0318 12:29:50.764742 6 log.go:172] (0xc00079f3f0) Data frame received for 3 I0318 12:29:50.764752 6 log.go:172] (0xc00279b860) (3) Data frame handling I0318 12:29:50.764760 6 log.go:172] (0xc00279b860) (3) Data frame sent I0318 12:29:50.764846 6 log.go:172] (0xc00079f3f0) Data frame received for 3 I0318 12:29:50.764856 6 log.go:172] (0xc00279b860) (3) Data frame handling I0318 12:29:50.766725 6 log.go:172] (0xc00079f3f0) Data frame received for 1 I0318 12:29:50.766757 6 log.go:172] (0xc00279b7c0) (1) Data frame handling I0318 12:29:50.766789 6 log.go:172] (0xc00279b7c0) (1) Data frame sent I0318 12:29:50.766838 6 log.go:172] (0xc00079f3f0) (0xc00279b7c0) Stream removed, broadcasting: 1 I0318 12:29:50.766962 6 log.go:172] (0xc00079f3f0) Go away received I0318 12:29:50.766998 6 log.go:172] (0xc00079f3f0) (0xc00279b7c0) Stream removed, broadcasting: 1 I0318 12:29:50.767057 6 log.go:172] (0xc00079f3f0) (0xc00279b860) Stream removed, broadcasting: 3 I0318 12:29:50.767099 6 log.go:172] (0xc00079f3f0) (0xc00279b900) Stream removed, broadcasting: 5 Mar 18 12:29:50.767: INFO: Exec stderr: "" Mar 18 12:29:50.767: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 12:29:50.767: INFO: >>> kubeConfig: /root/.kube/config I0318 12:29:50.803084 6 log.go:172] (0xc0026d02c0) (0xc00262fcc0) Create stream I0318 12:29:50.803119 6 log.go:172] (0xc0026d02c0) (0xc00262fcc0) Stream added, broadcasting: 1 I0318 12:29:50.811170 6 log.go:172] (0xc0026d02c0) Reply frame received for 1 I0318 12:29:50.811218 6 log.go:172] (0xc0026d02c0) (0xc000ee20a0) Create stream I0318 12:29:50.811232 6 log.go:172] (0xc0026d02c0) (0xc000ee20a0) Stream added, broadcasting: 3 I0318 12:29:50.812008 6 log.go:172] 
(0xc0026d02c0) Reply frame received for 3 I0318 12:29:50.812040 6 log.go:172] (0xc0026d02c0) (0xc000ee2140) Create stream I0318 12:29:50.812053 6 log.go:172] (0xc0026d02c0) (0xc000ee2140) Stream added, broadcasting: 5 I0318 12:29:50.812783 6 log.go:172] (0xc0026d02c0) Reply frame received for 5 I0318 12:29:50.874989 6 log.go:172] (0xc0026d02c0) Data frame received for 3 I0318 12:29:50.875023 6 log.go:172] (0xc000ee20a0) (3) Data frame handling I0318 12:29:50.875042 6 log.go:172] (0xc000ee20a0) (3) Data frame sent I0318 12:29:50.875062 6 log.go:172] (0xc0026d02c0) Data frame received for 3 I0318 12:29:50.875074 6 log.go:172] (0xc000ee20a0) (3) Data frame handling I0318 12:29:50.875131 6 log.go:172] (0xc0026d02c0) Data frame received for 5 I0318 12:29:50.875156 6 log.go:172] (0xc000ee2140) (5) Data frame handling I0318 12:29:50.877728 6 log.go:172] (0xc0026d02c0) Data frame received for 1 I0318 12:29:50.877749 6 log.go:172] (0xc00262fcc0) (1) Data frame handling I0318 12:29:50.877762 6 log.go:172] (0xc00262fcc0) (1) Data frame sent I0318 12:29:50.877773 6 log.go:172] (0xc0026d02c0) (0xc00262fcc0) Stream removed, broadcasting: 1 I0318 12:29:50.877834 6 log.go:172] (0xc0026d02c0) Go away received I0318 12:29:50.877882 6 log.go:172] (0xc0026d02c0) (0xc00262fcc0) Stream removed, broadcasting: 1 I0318 12:29:50.877921 6 log.go:172] (0xc0026d02c0) (0xc000ee20a0) Stream removed, broadcasting: 3 I0318 12:29:50.877940 6 log.go:172] (0xc0026d02c0) (0xc000ee2140) Stream removed, broadcasting: 5 Mar 18 12:29:50.877: INFO: Exec stderr: "" Mar 18 12:29:50.877: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 12:29:50.878: INFO: >>> kubeConfig: /root/.kube/config I0318 12:29:50.910588 6 log.go:172] (0xc0022682c0) (0xc0013ea1e0) Create stream I0318 12:29:50.910618 6 log.go:172] (0xc0022682c0) (0xc0013ea1e0) Stream added, broadcasting: 1 I0318 12:29:50.912874 6 log.go:172] (0xc0022682c0) Reply frame received for 1 I0318 12:29:50.912934 6 log.go:172] (0xc0022682c0) (0xc0013ea280) Create stream I0318 12:29:50.912943 6 log.go:172] (0xc0022682c0) (0xc0013ea280) Stream added, broadcasting: 3 I0318 12:29:50.914165 6 log.go:172] (0xc0022682c0) Reply frame received for 3 I0318 12:29:50.914202 6 log.go:172] (0xc0022682c0) (0xc0013ea320) Create stream I0318 12:29:50.914216 6 log.go:172] (0xc0022682c0) (0xc0013ea320) Stream added, broadcasting: 5 I0318 12:29:50.915332 6 log.go:172] (0xc0022682c0) Reply frame received for 5 I0318 12:29:50.981529 6 log.go:172] (0xc0022682c0) Data frame received for 3 I0318 12:29:50.981565 6 log.go:172] (0xc0013ea280) (3) Data frame handling I0318 12:29:50.981587 6 log.go:172] (0xc0013ea280) (3) Data frame sent I0318 12:29:50.981600 6 log.go:172] (0xc0022682c0) Data frame received for 3 I0318 12:29:50.981613 6 log.go:172] (0xc0013ea280) (3) Data frame handling I0318 12:29:50.981826 6 log.go:172] (0xc0022682c0) Data frame received for 5 I0318 12:29:50.981850 6 log.go:172] (0xc0013ea320) (5) Data frame handling I0318 12:29:50.983467 6 log.go:172] (0xc0022682c0) Data frame received for 1 I0318 12:29:50.983495 6 log.go:172] (0xc0013ea1e0) (1) Data frame handling I0318 12:29:50.983508 6 log.go:172] (0xc0013ea1e0) (1) Data frame sent I0318 12:29:50.983528 6 log.go:172] (0xc0022682c0) (0xc0013ea1e0) Stream removed, broadcasting: 1 I0318 12:29:50.983549 6 log.go:172] (0xc0022682c0) Go away received I0318 12:29:50.983628 6 
log.go:172] (0xc0022682c0) (0xc0013ea1e0) Stream removed, broadcasting: 1 I0318 12:29:50.983648 6 log.go:172] (0xc0022682c0) (0xc0013ea280) Stream removed, broadcasting: 3 I0318 12:29:50.983659 6 log.go:172] (0xc0022682c0) (0xc0013ea320) Stream removed, broadcasting: 5 Mar 18 12:29:50.983: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 18 12:29:50.983: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 12:29:50.983: INFO: >>> kubeConfig: /root/.kube/config I0318 12:29:51.019077 6 log.go:172] (0xc002268790) (0xc0013ea820) Create stream I0318 12:29:51.019151 6 log.go:172] (0xc002268790) (0xc0013ea820) Stream added, broadcasting: 1 I0318 12:29:51.023447 6 log.go:172] (0xc002268790) Reply frame received for 1 I0318 12:29:51.023487 6 log.go:172] (0xc002268790) (0xc000ee21e0) Create stream I0318 12:29:51.023499 6 log.go:172] (0xc002268790) (0xc000ee21e0) Stream added, broadcasting: 3 I0318 12:29:51.024833 6 log.go:172] (0xc002268790) Reply frame received for 3 I0318 12:29:51.024869 6 log.go:172] (0xc002268790) (0xc0013ea8c0) Create stream I0318 12:29:51.024885 6 log.go:172] (0xc002268790) (0xc0013ea8c0) Stream added, broadcasting: 5 I0318 12:29:51.026006 6 log.go:172] (0xc002268790) Reply frame received for 5 I0318 12:29:51.078628 6 log.go:172] (0xc002268790) Data frame received for 5 I0318 12:29:51.078666 6 log.go:172] (0xc002268790) Data frame received for 3 I0318 12:29:51.078730 6 log.go:172] (0xc000ee21e0) (3) Data frame handling I0318 12:29:51.078765 6 log.go:172] (0xc000ee21e0) (3) Data frame sent I0318 12:29:51.078794 6 log.go:172] (0xc0013ea8c0) (5) Data frame handling I0318 12:29:51.078820 6 log.go:172] (0xc002268790) Data frame received for 3 I0318 12:29:51.078843 6 log.go:172] (0xc000ee21e0) (3) Data frame handling I0318 12:29:51.080229 6 log.go:172] (0xc002268790) Data frame received for 1 I0318 12:29:51.080262 6 log.go:172] (0xc0013ea820) (1) Data frame handling I0318 12:29:51.080309 6 log.go:172] (0xc0013ea820) (1) Data frame sent I0318 12:29:51.080336 6 log.go:172] (0xc002268790) (0xc0013ea820) Stream removed, broadcasting: 1 I0318 12:29:51.080356 6 log.go:172] (0xc002268790) Go away received I0318 12:29:51.080463 6 log.go:172] (0xc002268790) (0xc0013ea820) Stream removed, broadcasting: 1 I0318 12:29:51.080475 6 log.go:172] (0xc002268790) (0xc000ee21e0) Stream removed, broadcasting: 3 I0318 12:29:51.080482 6 log.go:172] (0xc002268790) (0xc0013ea8c0) Stream removed, broadcasting: 5 Mar 18 12:29:51.080: INFO: Exec stderr: "" Mar 18 12:29:51.080: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 12:29:51.080: INFO: >>> kubeConfig: /root/.kube/config I0318 12:29:51.109369 6 log.go:172] (0xc00079f290) (0xc0024b41e0) Create stream I0318 12:29:51.109397 6 log.go:172] (0xc00079f290) (0xc0024b41e0) Stream added, broadcasting: 1 I0318 12:29:51.111106 6 log.go:172] (0xc00079f290) Reply frame received for 1 I0318 12:29:51.111142 6 log.go:172] (0xc00079f290) (0xc001d240a0) Create stream I0318 12:29:51.111155 6 log.go:172] (0xc00079f290) (0xc001d240a0) Stream added, broadcasting: 3 I0318 12:29:51.112034 6 log.go:172] (0xc00079f290) Reply frame received for 3 I0318 12:29:51.112056 
6 log.go:172] (0xc00079f290) (0xc0024b4280) Create stream
I0318 12:29:51.112066 6 log.go:172] (0xc00079f290) (0xc0024b4280) Stream added, broadcasting: 5
I0318 12:29:51.112881 6 log.go:172] (0xc00079f290) Reply frame received for 5
I0318 12:29:51.176523 6 log.go:172] (0xc00079f290) Data frame received for 3
I0318 12:29:51.176567 6 log.go:172] (0xc001d240a0) (3) Data frame handling
I0318 12:29:51.176591 6 log.go:172] (0xc001d240a0) (3) Data frame sent
I0318 12:29:51.176610 6 log.go:172] (0xc00079f290) Data frame received for 3
I0318 12:29:51.176625 6 log.go:172] (0xc001d240a0) (3) Data frame handling
I0318 12:29:51.176665 6 log.go:172] (0xc00079f290) Data frame received for 5
I0318 12:29:51.176698 6 log.go:172] (0xc0024b4280) (5) Data frame handling
I0318 12:29:51.184106 6 log.go:172] (0xc00079f290) Data frame received for 1
I0318 12:29:51.184131 6 log.go:172] (0xc0024b41e0) (1) Data frame handling
I0318 12:29:51.184151 6 log.go:172] (0xc0024b41e0) (1) Data frame sent
I0318 12:29:51.184166 6 log.go:172] (0xc00079f290) (0xc0024b41e0) Stream removed, broadcasting: 1
I0318 12:29:51.184191 6 log.go:172] (0xc00079f290) Go away received
I0318 12:29:51.184302 6 log.go:172] (0xc00079f290) (0xc0024b41e0) Stream removed, broadcasting: 1
I0318 12:29:51.184334 6 log.go:172] (0xc00079f290) (0xc001d240a0) Stream removed, broadcasting: 3
I0318 12:29:51.184347 6 log.go:172] (0xc00079f290) (0xc0024b4280) Stream removed, broadcasting: 5
Mar 18 12:29:51.184: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar 18 12:29:51.184: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 18 12:29:51.184: INFO: >>> kubeConfig: /root/.kube/config
I0318 12:29:51.220491 6 log.go:172] (0xc0026d0580) (0xc001d248c0) Create stream
I0318 12:29:51.220522 6 log.go:172] (0xc0026d0580) (0xc001d248c0) Stream added, broadcasting: 1
I0318 12:29:51.223696 6 log.go:172] (0xc0026d0580) Reply frame received for 1
I0318 12:29:51.223755 6 log.go:172] (0xc0026d0580) (0xc002970000) Create stream
I0318 12:29:51.223772 6 log.go:172] (0xc0026d0580) (0xc002970000) Stream added, broadcasting: 3
I0318 12:29:51.224623 6 log.go:172] (0xc0026d0580) Reply frame received for 3
I0318 12:29:51.224669 6 log.go:172] (0xc0026d0580) (0xc0024b4320) Create stream
I0318 12:29:51.224684 6 log.go:172] (0xc0026d0580) (0xc0024b4320) Stream added, broadcasting: 5
I0318 12:29:51.225833 6 log.go:172] (0xc0026d0580) Reply frame received for 5
I0318 12:29:51.292288 6 log.go:172] (0xc0026d0580) Data frame received for 5
I0318 12:29:51.292346 6 log.go:172] (0xc0024b4320) (5) Data frame handling
I0318 12:29:51.292380 6 log.go:172] (0xc0026d0580) Data frame received for 3
I0318 12:29:51.292394 6 log.go:172] (0xc002970000) (3) Data frame handling
I0318 12:29:51.292410 6 log.go:172] (0xc002970000) (3) Data frame sent
I0318 12:29:51.292425 6 log.go:172] (0xc0026d0580) Data frame received for 3
I0318 12:29:51.292437 6 log.go:172] (0xc002970000) (3) Data frame handling
I0318 12:29:51.294154 6 log.go:172] (0xc0026d0580) Data frame received for 1
I0318 12:29:51.294187 6 log.go:172] (0xc001d248c0) (1) Data frame handling
I0318 12:29:51.294202 6 log.go:172] (0xc001d248c0) (1) Data frame sent
I0318 12:29:51.294235 6 log.go:172] (0xc0026d0580) (0xc001d248c0) Stream removed, broadcasting: 1
I0318 12:29:51.294263 6 log.go:172] (0xc0026d0580) Go away received
I0318 12:29:51.294352 6 log.go:172] (0xc0026d0580) (0xc001d248c0) Stream removed, broadcasting: 1
I0318 12:29:51.294375 6 log.go:172] (0xc0026d0580) (0xc002970000) Stream removed, broadcasting: 3
I0318 12:29:51.294389 6 log.go:172] (0xc0026d0580) (0xc0024b4320) Stream removed, broadcasting: 5
Mar 18 12:29:51.294: INFO: Exec stderr: ""
Mar 18 12:29:51.294: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 18 12:29:51.294: INFO: >>> kubeConfig: /root/.kube/config
I0318 12:29:51.331487 6 log.go:172] (0xc0026d0a50) (0xc001d24be0) Create stream
I0318 12:29:51.331525 6 log.go:172] (0xc0026d0a50) (0xc001d24be0) Stream added, broadcasting: 1
I0318 12:29:51.334075 6 log.go:172] (0xc0026d0a50) Reply frame received for 1
I0318 12:29:51.334128 6 log.go:172] (0xc0026d0a50) (0xc000ee2280) Create stream
I0318 12:29:51.334144 6 log.go:172] (0xc0026d0a50) (0xc000ee2280) Stream added, broadcasting: 3
I0318 12:29:51.335086 6 log.go:172] (0xc0026d0a50) Reply frame received for 3
I0318 12:29:51.335130 6 log.go:172] (0xc0026d0a50) (0xc0029700a0) Create stream
I0318 12:29:51.335151 6 log.go:172] (0xc0026d0a50) (0xc0029700a0) Stream added, broadcasting: 5
I0318 12:29:51.336102 6 log.go:172] (0xc0026d0a50) Reply frame received for 5
I0318 12:29:51.407150 6 log.go:172] (0xc0026d0a50) Data frame received for 3
I0318 12:29:51.407198 6 log.go:172] (0xc000ee2280) (3) Data frame handling
I0318 12:29:51.407215 6 log.go:172] (0xc000ee2280) (3) Data frame sent
I0318 12:29:51.407229 6 log.go:172] (0xc0026d0a50) Data frame received for 3
I0318 12:29:51.407243 6 log.go:172] (0xc000ee2280) (3) Data frame handling
I0318 12:29:51.407269 6 log.go:172] (0xc0026d0a50) Data frame received for 5
I0318 12:29:51.407282 6 log.go:172] (0xc0029700a0) (5) Data frame handling
I0318 12:29:51.408991 6 log.go:172] (0xc0026d0a50) Data frame received for 1
I0318 12:29:51.409011 6 log.go:172] (0xc001d24be0) (1) Data frame handling
I0318 12:29:51.409031 6 log.go:172] (0xc001d24be0) (1) Data frame sent
I0318 12:29:51.409050 6 log.go:172] (0xc0026d0a50) (0xc001d24be0) Stream removed, broadcasting: 1
I0318 12:29:51.409343 6 log.go:172] (0xc0026d0a50) (0xc001d24be0) Stream removed, broadcasting: 1
I0318 12:29:51.409423 6 log.go:172] (0xc0026d0a50) (0xc000ee2280) Stream removed, broadcasting: 3
I0318 12:29:51.409463 6 log.go:172] (0xc0026d0a50) (0xc0029700a0) Stream removed, broadcasting: 5
Mar 18 12:29:51.409: INFO: Exec stderr: ""
I0318 12:29:51.409551 6 log.go:172] (0xc0026d0a50) Go away received
Mar 18 12:29:51.409: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 18 12:29:51.409: INFO: >>> kubeConfig: /root/.kube/config
I0318 12:29:51.443553 6 log.go:172] (0xc00079f760) (0xc0024b45a0) Create stream
I0318 12:29:51.443580 6 log.go:172] (0xc00079f760) (0xc0024b45a0) Stream added, broadcasting: 1
I0318 12:29:51.446089 6 log.go:172] (0xc00079f760) Reply frame received for 1
I0318 12:29:51.446146 6 log.go:172] (0xc00079f760) (0xc0024b4640) Create stream
I0318 12:29:51.446164 6 log.go:172] (0xc00079f760) (0xc0024b4640) Stream added, broadcasting: 3
I0318 12:29:51.447220 6 log.go:172] (0xc00079f760) Reply frame received for 3
I0318 12:29:51.447251 6 log.go:172] (0xc00079f760) (0xc0024b46e0) Create stream
I0318 12:29:51.447262 6 log.go:172] (0xc00079f760) (0xc0024b46e0) Stream added, broadcasting: 5
I0318 12:29:51.448463 6 log.go:172] (0xc00079f760) Reply frame received for 5
I0318 12:29:51.512731 6 log.go:172] (0xc00079f760) Data frame received for 3
I0318 12:29:51.512765 6 log.go:172] (0xc0024b4640) (3) Data frame handling
I0318 12:29:51.512780 6 log.go:172] (0xc0024b4640) (3) Data frame sent
I0318 12:29:51.512790 6 log.go:172] (0xc00079f760) Data frame received for 3
I0318 12:29:51.512797 6 log.go:172] (0xc0024b4640) (3) Data frame handling
I0318 12:29:51.512821 6 log.go:172] (0xc00079f760) Data frame received for 5
I0318 12:29:51.512831 6 log.go:172] (0xc0024b46e0) (5) Data frame handling
I0318 12:29:51.514525 6 log.go:172] (0xc00079f760) Data frame received for 1
I0318 12:29:51.514554 6 log.go:172] (0xc0024b45a0) (1) Data frame handling
I0318 12:29:51.514579 6 log.go:172] (0xc0024b45a0) (1) Data frame sent
I0318 12:29:51.514615 6 log.go:172] (0xc00079f760) (0xc0024b45a0) Stream removed, broadcasting: 1
I0318 12:29:51.514644 6 log.go:172] (0xc00079f760) Go away received
I0318 12:29:51.514753 6 log.go:172] (0xc00079f760) (0xc0024b45a0) Stream removed, broadcasting: 1
I0318 12:29:51.514767 6 log.go:172] (0xc00079f760) (0xc0024b4640) Stream removed, broadcasting: 3
I0318 12:29:51.514776 6 log.go:172] (0xc00079f760) (0xc0024b46e0) Stream removed, broadcasting: 5
Mar 18 12:29:51.514: INFO: Exec stderr: ""
Mar 18 12:29:51.514: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-gvc4w PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 18 12:29:51.514: INFO: >>> kubeConfig: /root/.kube/config
I0318 12:29:51.552970 6 log.go:172] (0xc0017d84d0) (0xc0029703c0) Create stream
I0318 12:29:51.552993 6 log.go:172] (0xc0017d84d0) (0xc0029703c0) Stream added, broadcasting: 1
I0318 12:29:51.556315 6 log.go:172] (0xc0017d84d0) Reply frame received for 1
I0318 12:29:51.556360 6 log.go:172] (0xc0017d84d0) (0xc0024b4780) Create stream
I0318 12:29:51.556376 6 log.go:172] (0xc0017d84d0) (0xc0024b4780) Stream added, broadcasting: 3
I0318 12:29:51.557435 6 log.go:172] (0xc0017d84d0) Reply frame received for 3
I0318 12:29:51.557471 6 log.go:172] (0xc0017d84d0) (0xc0024b4820) Create stream
I0318 12:29:51.557485 6 log.go:172] (0xc0017d84d0) (0xc0024b4820) Stream added, broadcasting: 5
I0318 12:29:51.558337 6 log.go:172] (0xc0017d84d0) Reply frame received for 5
I0318 12:29:51.624934 6 log.go:172] (0xc0017d84d0) Data frame received for 3
I0318 12:29:51.624953 6 log.go:172] (0xc0024b4780) (3) Data frame handling
I0318 12:29:51.624970 6 log.go:172] (0xc0024b4780) (3) Data frame sent
I0318 12:29:51.625352 6 log.go:172] (0xc0017d84d0) Data frame received for 5
I0318 12:29:51.625417 6 log.go:172] (0xc0024b4820) (5) Data frame handling
I0318 12:29:51.625464 6 log.go:172] (0xc0017d84d0) Data frame received for 3
I0318 12:29:51.625504 6 log.go:172] (0xc0024b4780) (3) Data frame handling
I0318 12:29:51.626744 6 log.go:172] (0xc0017d84d0) Data frame received for 1
I0318 12:29:51.626760 6 log.go:172] (0xc0029703c0) (1) Data frame handling
I0318 12:29:51.626768 6 log.go:172] (0xc0029703c0) (1) Data frame sent
I0318 12:29:51.626786 6 log.go:172] (0xc0017d84d0) (0xc0029703c0) Stream removed, broadcasting: 1
I0318 12:29:51.626800 6 log.go:172] (0xc0017d84d0) Go away received
I0318 12:29:51.627042 6 log.go:172] (0xc0017d84d0) (0xc0029703c0) Stream removed, broadcasting: 1
I0318 12:29:51.627056 6 log.go:172] (0xc0017d84d0) (0xc0024b4780) Stream removed, broadcasting: 3
I0318 12:29:51.627063 6 log.go:172] (0xc0017d84d0) (0xc0024b4820) Stream removed, broadcasting: 5
Mar 18 12:29:51.627: INFO: Exec stderr: ""
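The ExecWithOptions entries above show the framework running cat /etc/hosts and cat /etc/hosts-original in each container through the API server's exec subresource; the "Create stream" / "Stream added, broadcasting: ..." lines record the client-side SPDY streams that carry the error, stdout, and stderr channels. A minimal standalone sketch of an equivalent exec with client-go follows; it is not the e2e framework's own code, and only the kubeconfig path, namespace, pod, container, and command are taken from the log, everything else is an assumption for illustration.

// Sketch under assumptions: a minimal standalone equivalent of the
// ExecWithOptions calls logged above, written against client-go. Namespace,
// pod, container, command, and kubeconfig path are copied from the log;
// this is not the e2e framework's implementation.
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Build a REST config from the same kubeconfig the test run points at.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// POST to the pod's "exec" subresource, asking only for stdout and stderr,
	// which mirrors CaptureStdout:true CaptureStderr:true in the log entries.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-tests-e2e-kubelet-etc-hosts-gvc4w").
		Name("test-host-network-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// The SPDY executor multiplexes the error, stdout, and stderr channels over
	// a single connection; the per-stream setup is what the "Create stream" and
	// "Stream added, broadcasting" lines above record on the client side.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}

	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout:\n%s\nstderr: %q\n", stdout.String(), stderr.String())
}

An empty stderr from such a call is what the Exec stderr: "" lines above report.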
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 18 12:29:51.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-gvc4w" for this suite.
Mar 18 12:30:33.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:30:33.665: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-gvc4w, resource: bindings, ignored listing per whitelist
Mar 18 12:30:33.714: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-gvc4w deletion completed in 42.083290222s
• [SLOW TEST:53.326 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSMar 18 12:30:33.714: INFO: Running AfterSuite actions on all nodes
Mar 18 12:30:33.714: INFO: Running AfterSuite actions on node 1
Mar 18 12:30:33.714: INFO: Skipping dumping logs from cluster
Ran 200 of 2164 Specs in 6229.426 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS
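The last verification in the spec above checks that /etc/hosts is not kubelet-managed for a pod running with hostNetwork=true. A minimal sketch of a pod shaped like test-host-network-pod follows; the pod name, container names, and HostNetwork flag come from the log above, while the image, command, and restart policy are illustrative assumptions.

// Sketch under assumptions: a pod shaped like the test-host-network-pod used
// by the KubeletManagedEtcHosts spec above. Image, command, and restart policy
// are illustrative guesses; pod/container names and HostNetwork come from the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func hostNetworkPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			// With HostNetwork set, the kubelet does not manage /etc/hosts, which
			// is what the "not kubelet-managed" verification above relies on.
			HostNetwork: true,
			Containers: []corev1.Container{
				{
					Name:    "busybox-1",
					Image:   "busybox",                // assumed image
					Command: []string{"sleep", "900"}, // assumed long-running command
				},
				{
					Name:    "busybox-2",
					Image:   "busybox",
					Command: []string{"sleep", "900"},
				},
			},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() {
	// Print the object; creating it in a cluster would instead go through a
	// clientset's CoreV1().Pods(namespace).Create call.
	fmt.Printf("%+v\n", hostNetworkPod())
}

Creating such a pod and then comparing /etc/hosts with /etc/hosts-original inside each container, as the exec calls in the log do, is the essence of the check.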