I0701 07:31:40.973409 6 e2e.go:224] Starting e2e run "e79ec815-bb6c-11ea-a133-0242ac110018" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1593588700 - Will randomize all specs Will run 201 of 2164 specs Jul 1 07:31:41.172: INFO: >>> kubeConfig: /root/.kube/config Jul 1 07:31:41.176: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jul 1 07:31:41.195: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jul 1 07:31:41.274: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jul 1 07:31:41.274: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jul 1 07:31:41.274: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jul 1 07:31:41.282: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jul 1 07:31:41.282: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jul 1 07:31:41.282: INFO: e2e test version: v1.13.12 Jul 1 07:31:41.283: INFO: kube-apiserver version: v1.13.12 SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:31:41.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir Jul 1 07:31:41.423: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
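Before any specs run, the framework performs the readiness checks logged above: all nodes schedulable, all kube-system pods running and ready, and all kube-system daemonsets fully scheduled. The same checks can be approximated by hand; a rough sketch against a live cluster (the daemonset names are the ones this log reports):

```shell
# Approximate the pre-suite readiness checks by hand (requires a live cluster).
kubectl get nodes                          # all nodes Ready and schedulable?
kubectl get pods -n kube-system            # 12/12 Running and Ready?
kubectl rollout status daemonset/kindnet -n kube-system
kubectl rollout status daemonset/kube-proxy -n kube-system
```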
STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 1 07:31:41.432: INFO: Waiting up to 5m0s for pod "pod-e82badda-bb6c-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-jmp8f" to be "success or failure" Jul 1 07:31:41.449: INFO: Pod "pod-e82badda-bb6c-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.713285ms Jul 1 07:31:43.454: INFO: Pod "pod-e82badda-bb6c-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022303214s Jul 1 07:31:45.459: INFO: Pod "pod-e82badda-bb6c-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027075132s STEP: Saw pod success Jul 1 07:31:45.459: INFO: Pod "pod-e82badda-bb6c-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:31:45.462: INFO: Trying to get logs from node hunter-worker2 pod pod-e82badda-bb6c-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 07:31:45.501: INFO: Waiting for pod pod-e82badda-bb6c-11ea-a133-0242ac110018 to disappear Jul 1 07:31:45.510: INFO: Pod pod-e82badda-bb6c-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:31:45.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jmp8f" for this suite. 
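The pod in this spec writes a file into an emptyDir volume (default medium) as a non-root user with mode 0666 and asserts on the resulting permissions. Outside the cluster, the permission assertion itself reduces to a plain filesystem check; a minimal local sketch, assuming GNU stat (the paths here are illustrative, not the pod's actual mount):

```shell
# Local analog of the 0666 permission check (illustrative paths, GNU stat).
tmpdir=$(mktemp -d)               # stands in for the emptyDir mount point
touch "$tmpdir/test-file"
chmod 0666 "$tmpdir/test-file"    # the mode under test
stat -c '%a' "$tmpdir/test-file"  # prints 666
rm -rf "$tmpdir"
```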
Jul 1 07:31:51.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:31:51.602: INFO: namespace: e2e-tests-emptydir-jmp8f, resource: bindings, ignored listing per whitelist Jul 1 07:31:51.609: INFO: namespace e2e-tests-emptydir-jmp8f deletion completed in 6.095196225s • [SLOW TEST:10.326 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:31:51.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 1 07:31:51.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 
--namespace=e2e-tests-kubectl-mpnxs' Jul 1 07:31:54.881: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 1 07:31:54.881: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jul 1 07:31:58.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-mpnxs' Jul 1 07:31:59.040: INFO: stderr: "" Jul 1 07:31:59.040: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:31:59.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mpnxs" for this suite. 
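The stderr captured above warns that `kubectl run --generator=deployment/v1beta1` is deprecated. On current kubectl, the equivalent of the command this spec runs (and of its AfterEach cleanup) would be `kubectl create deployment` / `kubectl delete deployment`; a sketch against a live cluster, reusing the name and namespace from the log:

```shell
# Modern equivalent of the deprecated generator invocation above
# (name, image, and namespace taken from the log; requires a live cluster).
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-mpnxs
# matching cleanup, as the test's AfterEach does:
kubectl delete deployment e2e-test-nginx-deployment \
  --namespace=e2e-tests-kubectl-mpnxs
```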
Jul 1 07:32:21.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:32:21.216: INFO: namespace: e2e-tests-kubectl-mpnxs, resource: bindings, ignored listing per whitelist Jul 1 07:32:21.244: INFO: namespace e2e-tests-kubectl-mpnxs deletion completed in 22.19960265s • [SLOW TEST:29.634 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:32:21.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 1 07:32:25.926: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fffabbea-bb6c-11ea-a133-0242ac110018" Jul 1 07:32:25.926: INFO: Waiting up to 
5m0s for pod "pod-update-activedeadlineseconds-fffabbea-bb6c-11ea-a133-0242ac110018" in namespace "e2e-tests-pods-x4nws" to be "terminated due to deadline exceeded" Jul 1 07:32:25.931: INFO: Pod "pod-update-activedeadlineseconds-fffabbea-bb6c-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.331485ms Jul 1 07:32:27.935: INFO: Pod "pod-update-activedeadlineseconds-fffabbea-bb6c-11ea-a133-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.008220461s Jul 1 07:32:27.935: INFO: Pod "pod-update-activedeadlineseconds-fffabbea-bb6c-11ea-a133-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:32:27.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-x4nws" for this suite. Jul 1 07:32:33.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:32:33.968: INFO: namespace: e2e-tests-pods-x4nws, resource: bindings, ignored listing per whitelist Jul 1 07:32:34.024: INFO: namespace e2e-tests-pods-x4nws deletion completed in 6.084584791s • [SLOW TEST:12.780 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:32:34.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 07:32:34.165: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:32:35.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-q6sz8" for this suite. Jul 1 07:32:41.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:32:41.362: INFO: namespace: e2e-tests-custom-resource-definition-q6sz8, resource: bindings, ignored listing per whitelist Jul 1 07:32:41.399: INFO: namespace e2e-tests-custom-resource-definition-q6sz8 deletion completed in 6.153421869s • [SLOW TEST:7.374 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:32:41.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 07:32:41.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0bf78209-bb6d-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-7q2lf" to be "success or failure" Jul 1 07:32:41.585: INFO: Pod "downwardapi-volume-0bf78209-bb6d-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 65.603756ms Jul 1 07:32:43.589: INFO: Pod "downwardapi-volume-0bf78209-bb6d-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069826675s Jul 1 07:32:45.593: INFO: Pod "downwardapi-volume-0bf78209-bb6d-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.073209938s STEP: Saw pod success Jul 1 07:32:45.593: INFO: Pod "downwardapi-volume-0bf78209-bb6d-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:32:45.595: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0bf78209-bb6d-11ea-a133-0242ac110018 container client-container: STEP: delete the pod Jul 1 07:32:45.615: INFO: Waiting for pod downwardapi-volume-0bf78209-bb6d-11ea-a133-0242ac110018 to disappear Jul 1 07:32:45.644: INFO: Pod downwardapi-volume-0bf78209-bb6d-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:32:45.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7q2lf" for this suite. Jul 1 07:32:51.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:32:51.737: INFO: namespace: e2e-tests-downward-api-7q2lf, resource: bindings, ignored listing per whitelist Jul 1 07:32:51.739: INFO: namespace e2e-tests-downward-api-7q2lf deletion completed in 6.091352302s • [SLOW TEST:10.340 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:32:51.740: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-7trgt.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-7trgt.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-7trgt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-7trgt.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-7trgt.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-7trgt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 07:33:00.018: INFO: DNS probes using e2e-tests-dns-7trgt/dns-test-122b339e-bb6d-11ea-a133-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:33:00.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-7trgt" for this suite. Jul 1 07:33:06.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:33:06.197: INFO: namespace: e2e-tests-dns-7trgt, resource: bindings, ignored listing per whitelist Jul 1 07:33:06.245: INFO: namespace e2e-tests-dns-7trgt deletion completed in 6.082225735s • [SLOW TEST:14.506 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:33:06.246: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 07:33:06.390: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jul 1 07:33:06.398: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-48q2s/daemonsets","resourceVersion":"18819112"},"items":null} Jul 1 07:33:06.400: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-48q2s/pods","resourceVersion":"18819112"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:33:06.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-48q2s" for this suite. 
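The spec above skips because it wants at least 2 schedulable nodes but the framework reports -1; the -1 is likely the framework's configured node count never having been populated, rather than an actual count of cluster nodes. The real node count can be sanity-checked directly; a rough sketch (the SchedulingDisabled filter is a heuristic):

```shell
# Rough by-hand check of the skip condition (requires a live cluster).
kubectl get nodes --no-headers | wc -l                        # total nodes
kubectl get nodes --no-headers | grep -vc SchedulingDisabled  # schedulable nodes
```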
Jul 1 07:33:12.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:33:12.448: INFO: namespace: e2e-tests-daemonsets-48q2s, resource: bindings, ignored listing per whitelist Jul 1 07:33:12.503: INFO: namespace e2e-tests-daemonsets-48q2s deletion completed in 6.090524287s S [SKIPPING] [6.258 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 07:33:06.390: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:33:12.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-csfv2 [It] should perform rolling updates and roll backs of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jul 1 07:33:12.668: INFO: Found 0 stateful pods, waiting for 3 Jul 1 07:33:22.673: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 07:33:22.674: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 07:33:22.674: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 1 07:33:32.673: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 07:33:32.673: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 07:33:32.673: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jul 1 07:33:32.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-csfv2 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 07:33:32.996: INFO: stderr: "I0701 07:33:32.820192 88 log.go:172] (0xc000138630) (0xc0006c2640) Create stream\nI0701 07:33:32.820262 88 log.go:172] (0xc000138630) (0xc0006c2640) Stream added, broadcasting: 1\nI0701 07:33:32.823473 88 log.go:172] (0xc000138630) Reply frame received for 1\nI0701 07:33:32.823532 88 log.go:172] (0xc000138630) (0xc0006c26e0) Create stream\nI0701 07:33:32.823552 88 log.go:172] (0xc000138630) (0xc0006c26e0) Stream added, broadcasting: 3\nI0701 07:33:32.824283 88 log.go:172] (0xc000138630) Reply frame received for 3\nI0701 07:33:32.824320 88 log.go:172] (0xc000138630) (0xc000368d20) Create stream\nI0701 07:33:32.824346 88 log.go:172] (0xc000138630) (0xc000368d20) Stream added, broadcasting: 5\nI0701 07:33:32.825077 88 log.go:172] (0xc000138630) Reply frame received for 5\nI0701 07:33:32.985915 88 log.go:172] (0xc000138630) Data frame received for 3\nI0701 07:33:32.985947 88 log.go:172] 
(0xc0006c26e0) (3) Data frame handling\nI0701 07:33:32.986123 88 log.go:172] (0xc0006c26e0) (3) Data frame sent\nI0701 07:33:32.986480 88 log.go:172] (0xc000138630) Data frame received for 5\nI0701 07:33:32.986623 88 log.go:172] (0xc000138630) Data frame received for 3\nI0701 07:33:32.986636 88 log.go:172] (0xc0006c26e0) (3) Data frame handling\nI0701 07:33:32.986685 88 log.go:172] (0xc000368d20) (5) Data frame handling\nI0701 07:33:32.988181 88 log.go:172] (0xc000138630) Data frame received for 1\nI0701 07:33:32.988198 88 log.go:172] (0xc0006c2640) (1) Data frame handling\nI0701 07:33:32.988208 88 log.go:172] (0xc0006c2640) (1) Data frame sent\nI0701 07:33:32.988220 88 log.go:172] (0xc000138630) (0xc0006c2640) Stream removed, broadcasting: 1\nI0701 07:33:32.988236 88 log.go:172] (0xc000138630) Go away received\nI0701 07:33:32.988518 88 log.go:172] (0xc000138630) (0xc0006c2640) Stream removed, broadcasting: 1\nI0701 07:33:32.988544 88 log.go:172] (0xc000138630) (0xc0006c26e0) Stream removed, broadcasting: 3\nI0701 07:33:32.988557 88 log.go:172] (0xc000138630) (0xc000368d20) Stream removed, broadcasting: 5\n" Jul 1 07:33:32.996: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 07:33:32.996: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jul 1 07:33:43.052: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 1 07:33:53.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-csfv2 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 07:33:53.390: INFO: stderr: "I0701 07:33:53.290604 111 log.go:172] (0xc000778160) (0xc0006a4640) Create stream\nI0701 07:33:53.290665 111 log.go:172] 
(0xc000778160) (0xc0006a4640) Stream added, broadcasting: 1\nI0701 07:33:53.293348 111 log.go:172] (0xc000778160) Reply frame received for 1\nI0701 07:33:53.293382 111 log.go:172] (0xc000778160) (0xc0000d4dc0) Create stream\nI0701 07:33:53.293395 111 log.go:172] (0xc000778160) (0xc0000d4dc0) Stream added, broadcasting: 3\nI0701 07:33:53.294390 111 log.go:172] (0xc000778160) Reply frame received for 3\nI0701 07:33:53.294445 111 log.go:172] (0xc000778160) (0xc00002a000) Create stream\nI0701 07:33:53.294461 111 log.go:172] (0xc000778160) (0xc00002a000) Stream added, broadcasting: 5\nI0701 07:33:53.295342 111 log.go:172] (0xc000778160) Reply frame received for 5\nI0701 07:33:53.383688 111 log.go:172] (0xc000778160) Data frame received for 3\nI0701 07:33:53.383732 111 log.go:172] (0xc0000d4dc0) (3) Data frame handling\nI0701 07:33:53.383751 111 log.go:172] (0xc0000d4dc0) (3) Data frame sent\nI0701 07:33:53.383759 111 log.go:172] (0xc000778160) Data frame received for 3\nI0701 07:33:53.383765 111 log.go:172] (0xc0000d4dc0) (3) Data frame handling\nI0701 07:33:53.383798 111 log.go:172] (0xc000778160) Data frame received for 5\nI0701 07:33:53.383824 111 log.go:172] (0xc00002a000) (5) Data frame handling\nI0701 07:33:53.385830 111 log.go:172] (0xc000778160) Data frame received for 1\nI0701 07:33:53.385842 111 log.go:172] (0xc0006a4640) (1) Data frame handling\nI0701 07:33:53.385847 111 log.go:172] (0xc0006a4640) (1) Data frame sent\nI0701 07:33:53.385859 111 log.go:172] (0xc000778160) (0xc0006a4640) Stream removed, broadcasting: 1\nI0701 07:33:53.385931 111 log.go:172] (0xc000778160) Go away received\nI0701 07:33:53.386018 111 log.go:172] (0xc000778160) (0xc0006a4640) Stream removed, broadcasting: 1\nI0701 07:33:53.386035 111 log.go:172] (0xc000778160) (0xc0000d4dc0) Stream removed, broadcasting: 3\nI0701 07:33:53.386045 111 log.go:172] (0xc000778160) (0xc00002a000) Stream removed, broadcasting: 5\n" Jul 1 07:33:53.390: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" Jul 1 07:33:53.390: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 07:34:03.413: INFO: Waiting for StatefulSet e2e-tests-statefulset-csfv2/ss2 to complete update Jul 1 07:34:03.413: INFO: Waiting for Pod e2e-tests-statefulset-csfv2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 1 07:34:03.413: INFO: Waiting for Pod e2e-tests-statefulset-csfv2/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 1 07:34:13.460: INFO: Waiting for StatefulSet e2e-tests-statefulset-csfv2/ss2 to complete update Jul 1 07:34:13.460: INFO: Waiting for Pod e2e-tests-statefulset-csfv2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 1 07:34:23.454: INFO: Waiting for StatefulSet e2e-tests-statefulset-csfv2/ss2 to complete update STEP: Rolling back to a previous revision Jul 1 07:34:33.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-csfv2 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 07:34:33.716: INFO: stderr: "I0701 07:34:33.561304 133 log.go:172] (0xc000154840) (0xc000736640) Create stream\nI0701 07:34:33.561376 133 log.go:172] (0xc000154840) (0xc000736640) Stream added, broadcasting: 1\nI0701 07:34:33.564523 133 log.go:172] (0xc000154840) Reply frame received for 1\nI0701 07:34:33.564571 133 log.go:172] (0xc000154840) (0xc00067adc0) Create stream\nI0701 07:34:33.564584 133 log.go:172] (0xc000154840) (0xc00067adc0) Stream added, broadcasting: 3\nI0701 07:34:33.565953 133 log.go:172] (0xc000154840) Reply frame received for 3\nI0701 07:34:33.566025 133 log.go:172] (0xc000154840) (0xc00067af00) Create stream\nI0701 07:34:33.566059 133 log.go:172] (0xc000154840) (0xc00067af00) Stream added, broadcasting: 5\nI0701 07:34:33.567218 133 log.go:172] (0xc000154840) Reply frame received for 5\nI0701 
07:34:33.707861 133 log.go:172] (0xc000154840) Data frame received for 3\nI0701 07:34:33.707896 133 log.go:172] (0xc00067adc0) (3) Data frame handling\nI0701 07:34:33.707919 133 log.go:172] (0xc00067adc0) (3) Data frame sent\nI0701 07:34:33.707926 133 log.go:172] (0xc000154840) Data frame received for 3\nI0701 07:34:33.707931 133 log.go:172] (0xc00067adc0) (3) Data frame handling\nI0701 07:34:33.708306 133 log.go:172] (0xc000154840) Data frame received for 5\nI0701 07:34:33.708336 133 log.go:172] (0xc00067af00) (5) Data frame handling\nI0701 07:34:33.710466 133 log.go:172] (0xc000154840) Data frame received for 1\nI0701 07:34:33.710497 133 log.go:172] (0xc000736640) (1) Data frame handling\nI0701 07:34:33.710522 133 log.go:172] (0xc000736640) (1) Data frame sent\nI0701 07:34:33.710548 133 log.go:172] (0xc000154840) (0xc000736640) Stream removed, broadcasting: 1\nI0701 07:34:33.710724 133 log.go:172] (0xc000154840) Go away received\nI0701 07:34:33.710871 133 log.go:172] (0xc000154840) (0xc000736640) Stream removed, broadcasting: 1\nI0701 07:34:33.710904 133 log.go:172] (0xc000154840) (0xc00067adc0) Stream removed, broadcasting: 3\nI0701 07:34:33.710924 133 log.go:172] (0xc000154840) (0xc00067af00) Stream removed, broadcasting: 5\n" Jul 1 07:34:33.716: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 07:34:33.716: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 07:34:43.813: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 1 07:34:53.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-csfv2 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 07:34:54.071: INFO: stderr: "I0701 07:34:53.978882 155 log.go:172] (0xc00014c840) (0xc00066d400) Create stream\nI0701 07:34:53.978945 155 log.go:172] (0xc00014c840) (0xc00066d400) 
Stream added, broadcasting: 1\nI0701 07:34:53.981461 155 log.go:172] (0xc00014c840) Reply frame received for 1\nI0701 07:34:53.981607 155 log.go:172] (0xc00014c840) (0xc00066d4a0) Create stream\nI0701 07:34:53.981637 155 log.go:172] (0xc00014c840) (0xc00066d4a0) Stream added, broadcasting: 3\nI0701 07:34:53.982795 155 log.go:172] (0xc00014c840) Reply frame received for 3\nI0701 07:34:53.982829 155 log.go:172] (0xc00014c840) (0xc00066d540) Create stream\nI0701 07:34:53.982839 155 log.go:172] (0xc00014c840) (0xc00066d540) Stream added, broadcasting: 5\nI0701 07:34:53.983810 155 log.go:172] (0xc00014c840) Reply frame received for 5\nI0701 07:34:54.062364 155 log.go:172] (0xc00014c840) Data frame received for 5\nI0701 07:34:54.062436 155 log.go:172] (0xc00066d540) (5) Data frame handling\nI0701 07:34:54.062468 155 log.go:172] (0xc00014c840) Data frame received for 3\nI0701 07:34:54.062483 155 log.go:172] (0xc00066d4a0) (3) Data frame handling\nI0701 07:34:54.062500 155 log.go:172] (0xc00066d4a0) (3) Data frame sent\nI0701 07:34:54.062528 155 log.go:172] (0xc00014c840) Data frame received for 3\nI0701 07:34:54.062542 155 log.go:172] (0xc00066d4a0) (3) Data frame handling\nI0701 07:34:54.063695 155 log.go:172] (0xc00014c840) Data frame received for 1\nI0701 07:34:54.063723 155 log.go:172] (0xc00066d400) (1) Data frame handling\nI0701 07:34:54.063733 155 log.go:172] (0xc00066d400) (1) Data frame sent\nI0701 07:34:54.063760 155 log.go:172] (0xc00014c840) (0xc00066d400) Stream removed, broadcasting: 1\nI0701 07:34:54.063801 155 log.go:172] (0xc00014c840) Go away received\nI0701 07:34:54.063964 155 log.go:172] (0xc00014c840) (0xc00066d400) Stream removed, broadcasting: 1\nI0701 07:34:54.063984 155 log.go:172] (0xc00014c840) (0xc00066d4a0) Stream removed, broadcasting: 3\nI0701 07:34:54.063994 155 log.go:172] (0xc00014c840) (0xc00066d540) Stream removed, broadcasting: 5\n" Jul 1 07:34:54.071: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 
07:34:54.071: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 07:35:04.090: INFO: Waiting for StatefulSet e2e-tests-statefulset-csfv2/ss2 to complete update Jul 1 07:35:04.090: INFO: Waiting for Pod e2e-tests-statefulset-csfv2/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 1 07:35:04.090: INFO: Waiting for Pod e2e-tests-statefulset-csfv2/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 1 07:35:14.101: INFO: Waiting for StatefulSet e2e-tests-statefulset-csfv2/ss2 to complete update Jul 1 07:35:14.101: INFO: Waiting for Pod e2e-tests-statefulset-csfv2/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 1 07:35:24.105: INFO: Waiting for StatefulSet e2e-tests-statefulset-csfv2/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 1 07:35:34.099: INFO: Deleting all statefulset in ns e2e-tests-statefulset-csfv2 Jul 1 07:35:34.102: INFO: Scaling statefulset ss2 to 0 Jul 1 07:35:54.457: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 07:35:54.498: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:35:54.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-csfv2" for this suite. 
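[Editor's note] The rolling update and rollback exercised by the StatefulSet test above hinge on the RollingUpdate strategy: mutating the pod template creates a new controller revision (the `ss2-7c9b54fd4c` / `ss2-6c5cd755cd` revisions in the log), and pods are replaced in reverse ordinal order. A minimal sketch of such a StatefulSet follows; all names and the image tag are illustrative, not the ones generated by the test:

```yaml
# Minimal StatefulSet using the RollingUpdate strategy the test relies on.
# Names/image are hypothetical; the e2e framework generates its own.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: ss2          # headless Service assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate     # pods updated in reverse ordinal order (ss2-1, then ss2-0)
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: nginx:1.15   # editing this field produces a new controller revision
```

Rolling back, as the "Rolling back to a previous revision" step does programmatically, can be approximated manually with `kubectl rollout undo statefulset/ss2`.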
Jul 1 07:36:00.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:36:00.756: INFO: namespace: e2e-tests-statefulset-csfv2, resource: bindings, ignored listing per whitelist Jul 1 07:36:00.782: INFO: namespace e2e-tests-statefulset-csfv2 deletion completed in 6.152030827s • [SLOW TEST:168.279 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:36:00.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jul 1 07:36:00.930: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 07:36:00.932: INFO: Number of nodes with available pods: 0 Jul 1 07:36:00.932: INFO: Node hunter-worker is running more than one daemon pod Jul 1 07:36:01.939: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 07:36:01.942: INFO: Number of nodes with available pods: 0 Jul 1 07:36:01.942: INFO: Node hunter-worker is running more than one daemon pod Jul 1 07:36:03.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 07:36:03.506: INFO: Number of nodes with available pods: 0 Jul 1 07:36:03.507: INFO: Node hunter-worker is running more than one daemon pod Jul 1 07:36:03.938: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 07:36:03.941: INFO: Number of nodes with available pods: 0 Jul 1 07:36:03.941: INFO: Node hunter-worker is running more than one daemon pod Jul 1 07:36:04.936: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 07:36:04.954: INFO: Number of nodes with available pods: 0 Jul 1 07:36:04.954: INFO: Node hunter-worker is running more than one daemon pod Jul 1 07:36:05.938: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 07:36:05.942: INFO: Number of nodes with available pods: 2 Jul 1 07:36:05.942: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jul 1 07:36:05.973: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 07:36:05.991: INFO: Number of nodes with available pods: 2 Jul 1 07:36:05.991: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-pc5ff, will wait for the garbage collector to delete the pods Jul 1 07:36:07.094: INFO: Deleting DaemonSet.extensions daemon-set took: 38.55743ms Jul 1 07:36:07.294: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.216829ms Jul 1 07:36:10.598: INFO: Number of nodes with available pods: 0 Jul 1 07:36:10.598: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 07:36:10.601: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pc5ff/daemonsets","resourceVersion":"18819872"},"items":null} Jul 1 07:36:10.604: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pc5ff/pods","resourceVersion":"18819873"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:36:10.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-pc5ff" for this suite. 
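[Editor's note] The "can't tolerate node hunter-control-plane" lines above reflect normal DaemonSet scheduling: without a matching toleration, DaemonSet pods skip nodes tainted `node-role.kubernetes.io/master:NoSchedule`, which is why only the two worker nodes count. A minimal DaemonSet of the shape this test creates might look like the following (names and image are illustrative):

```yaml
# Sketch of a simple DaemonSet; as written it will NOT schedule onto the
# tainted control-plane node, matching the log output above.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # Uncommenting this toleration would allow scheduling on the master:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
      containers:
      - name: app
        image: nginx        # illustrative image
```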
Jul 1 07:36:16.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:36:16.707: INFO: namespace: e2e-tests-daemonsets-pc5ff, resource: bindings, ignored listing per whitelist Jul 1 07:36:16.720: INFO: namespace e2e-tests-daemonsets-pc5ff deletion completed in 6.104804752s • [SLOW TEST:15.938 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:36:16.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-w7dt STEP: Creating a pod to test atomic-volume-subpath Jul 1 07:36:16.885: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-w7dt" in namespace "e2e-tests-subpath-dckmc" to be "success or failure" Jul 1 07:36:16.894: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.436093ms Jul 1 07:36:18.954: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069759393s Jul 1 07:36:20.957: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072692338s Jul 1 07:36:22.962: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=true. Elapsed: 6.077159921s Jul 1 07:36:24.965: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=false. Elapsed: 8.08070832s Jul 1 07:36:26.970: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=false. Elapsed: 10.085217987s Jul 1 07:36:28.974: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=false. Elapsed: 12.089691292s Jul 1 07:36:30.979: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=false. Elapsed: 14.094770726s Jul 1 07:36:32.984: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=false. Elapsed: 16.099222516s Jul 1 07:36:34.987: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=false. Elapsed: 18.102826894s Jul 1 07:36:36.992: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=false. Elapsed: 20.107234188s Jul 1 07:36:38.996: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=false. Elapsed: 22.111594586s Jul 1 07:36:41.000: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Running", Reason="", readiness=false. Elapsed: 24.115074167s Jul 1 07:36:43.004: INFO: Pod "pod-subpath-test-secret-w7dt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.119750887s STEP: Saw pod success Jul 1 07:36:43.004: INFO: Pod "pod-subpath-test-secret-w7dt" satisfied condition "success or failure" Jul 1 07:36:43.008: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-w7dt container test-container-subpath-secret-w7dt: STEP: delete the pod Jul 1 07:36:43.050: INFO: Waiting for pod pod-subpath-test-secret-w7dt to disappear Jul 1 07:36:43.067: INFO: Pod pod-subpath-test-secret-w7dt no longer exists STEP: Deleting pod pod-subpath-test-secret-w7dt Jul 1 07:36:43.067: INFO: Deleting pod "pod-subpath-test-secret-w7dt" in namespace "e2e-tests-subpath-dckmc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:36:43.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-dckmc" for this suite. Jul 1 07:36:49.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:36:49.191: INFO: namespace: e2e-tests-subpath-dckmc, resource: bindings, ignored listing per whitelist Jul 1 07:36:49.222: INFO: namespace e2e-tests-subpath-dckmc deletion completed in 6.123796038s • [SLOW TEST:32.501 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:36:49.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 07:36:49.352: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9faff9e9-bb6d-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-q4n9s" to be "success or failure" Jul 1 07:36:49.362: INFO: Pod "downwardapi-volume-9faff9e9-bb6d-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.591637ms Jul 1 07:36:51.367: INFO: Pod "downwardapi-volume-9faff9e9-bb6d-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014953968s Jul 1 07:36:53.372: INFO: Pod "downwardapi-volume-9faff9e9-bb6d-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020144379s STEP: Saw pod success Jul 1 07:36:53.372: INFO: Pod "downwardapi-volume-9faff9e9-bb6d-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:36:53.378: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-9faff9e9-bb6d-11ea-a133-0242ac110018 container client-container: STEP: delete the pod Jul 1 07:36:53.435: INFO: Waiting for pod downwardapi-volume-9faff9e9-bb6d-11ea-a133-0242ac110018 to disappear Jul 1 07:36:53.493: INFO: Pod downwardapi-volume-9faff9e9-bb6d-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:36:53.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q4n9s" for this suite. Jul 1 07:36:59.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:36:59.539: INFO: namespace: e2e-tests-downward-api-q4n9s, resource: bindings, ignored listing per whitelist Jul 1 07:36:59.600: INFO: namespace e2e-tests-downward-api-q4n9s deletion completed in 6.102885256s • [SLOW TEST:10.379 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jul 1 07:36:59.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jul 1 07:37:04.302: INFO: Successfully updated pod "labelsupdatea5e8076f-bb6d-11ea-a133-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:37:06.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b7shm" for this suite. Jul 1 07:37:26.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:37:26.473: INFO: namespace: e2e-tests-projected-b7shm, resource: bindings, ignored listing per whitelist Jul 1 07:37:26.500: INFO: namespace e2e-tests-projected-b7shm deletion completed in 20.134831263s • [SLOW TEST:26.899 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:37:26.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 07:37:26.574: INFO: Creating ReplicaSet my-hostname-basic-b5e51cea-bb6d-11ea-a133-0242ac110018 Jul 1 07:37:26.643: INFO: Pod name my-hostname-basic-b5e51cea-bb6d-11ea-a133-0242ac110018: Found 0 pods out of 1 Jul 1 07:37:31.648: INFO: Pod name my-hostname-basic-b5e51cea-bb6d-11ea-a133-0242ac110018: Found 1 pods out of 1 Jul 1 07:37:31.649: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b5e51cea-bb6d-11ea-a133-0242ac110018" is running Jul 1 07:37:31.652: INFO: Pod "my-hostname-basic-b5e51cea-bb6d-11ea-a133-0242ac110018-xwcjc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 07:37:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 07:37:30 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 07:37:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 07:37:26 +0000 UTC Reason: Message:}]) Jul 1 07:37:31.652: INFO: Trying to dial the pod Jul 1 07:37:36.666: INFO: Controller my-hostname-basic-b5e51cea-bb6d-11ea-a133-0242ac110018: Got expected result from replica 1 [my-hostname-basic-b5e51cea-bb6d-11ea-a133-0242ac110018-xwcjc]: "my-hostname-basic-b5e51cea-bb6d-11ea-a133-0242ac110018-xwcjc", 1 of 1 required successes so far [AfterEach] 
[sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:37:36.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-m6sq6" for this suite. Jul 1 07:37:42.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:37:42.760: INFO: namespace: e2e-tests-replicaset-m6sq6, resource: bindings, ignored listing per whitelist Jul 1 07:37:42.768: INFO: namespace e2e-tests-replicaset-m6sq6 deletion completed in 6.099100532s • [SLOW TEST:16.267 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:37:42.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0701 07:38:22.960581 6 metrics_grabber.go:81] Master node is not 
registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 07:38:22.960: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:38:22.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-spzb7" for this suite. 
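[Editor's note] The "orphan pods" behavior verified above is driven by the deletion propagation policy: deleting the ReplicationController with `Orphan` propagation removes the owner reference work from the garbage collector, so the pods survive the 30-second observation window. A hedged sketch of the delete options involved (the test issues this through the client library, not a manifest):

```yaml
# meta/v1 DeleteOptions body requesting orphaning instead of cascading
# deletion; shown for illustration of what the test asserts.
apiVersion: meta.k8s.io/v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With kubectl of this era, the rough equivalent is `kubectl delete rc <name> --cascade=false`.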
Jul 1 07:38:30.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:38:31.046: INFO: namespace: e2e-tests-gc-spzb7, resource: bindings, ignored listing per whitelist Jul 1 07:38:31.056: INFO: namespace e2e-tests-gc-spzb7 deletion completed in 8.092159891s • [SLOW TEST:48.287 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:38:31.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 07:38:31.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc709ef8-bb6d-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-q4bbg" to be "success or failure" Jul 1 07:38:31.390: INFO: Pod "downwardapi-volume-dc709ef8-bb6d-11ea-a133-0242ac110018": Phase="Pending", Reason="", 
readiness=false. Elapsed: 72.956906ms Jul 1 07:38:33.394: INFO: Pod "downwardapi-volume-dc709ef8-bb6d-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077479829s Jul 1 07:38:35.399: INFO: Pod "downwardapi-volume-dc709ef8-bb6d-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082118536s STEP: Saw pod success Jul 1 07:38:35.399: INFO: Pod "downwardapi-volume-dc709ef8-bb6d-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:38:35.402: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-dc709ef8-bb6d-11ea-a133-0242ac110018 container client-container: STEP: delete the pod Jul 1 07:38:35.427: INFO: Waiting for pod downwardapi-volume-dc709ef8-bb6d-11ea-a133-0242ac110018 to disappear Jul 1 07:38:35.431: INFO: Pod downwardapi-volume-dc709ef8-bb6d-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:38:35.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q4bbg" for this suite. 
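[Editor's note] The projected downwardAPI test above mounts a container's own CPU request into a file via `resourceFieldRef`. A minimal pod of this shape (names, image, and the 250m request are illustrative):

```yaml
# Pod that projects its container's CPU request into /etc/podinfo/cpu_request.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m            # the value the projected file will contain (in millicores)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```

The test passes when the pod exits successfully after the file's contents match the declared request.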
Jul 1 07:38:41.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:38:41.463: INFO: namespace: e2e-tests-projected-q4bbg, resource: bindings, ignored listing per whitelist Jul 1 07:38:41.523: INFO: namespace e2e-tests-projected-q4bbg deletion completed in 6.088921509s • [SLOW TEST:10.467 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:38:41.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jul 1 07:38:41.619: INFO: Waiting up to 5m0s for pod "var-expansion-e29e86b7-bb6d-11ea-a133-0242ac110018" in namespace "e2e-tests-var-expansion-2766l" to be "success or failure" Jul 1 07:38:41.623: INFO: Pod "var-expansion-e29e86b7-bb6d-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.801572ms Jul 1 07:38:43.627: INFO: Pod "var-expansion-e29e86b7-bb6d-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008357528s Jul 1 07:38:45.632: INFO: Pod "var-expansion-e29e86b7-bb6d-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012908445s STEP: Saw pod success Jul 1 07:38:45.632: INFO: Pod "var-expansion-e29e86b7-bb6d-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:38:45.635: INFO: Trying to get logs from node hunter-worker pod var-expansion-e29e86b7-bb6d-11ea-a133-0242ac110018 container dapi-container: STEP: delete the pod Jul 1 07:38:45.702: INFO: Waiting for pod var-expansion-e29e86b7-bb6d-11ea-a133-0242ac110018 to disappear Jul 1 07:38:45.713: INFO: Pod var-expansion-e29e86b7-bb6d-11ea-a133-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:38:45.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-2766l" for this suite. 
Jul 1 07:38:51.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:38:51.794: INFO: namespace: e2e-tests-var-expansion-2766l, resource: bindings, ignored listing per whitelist Jul 1 07:38:51.855: INFO: namespace e2e-tests-var-expansion-2766l deletion completed in 6.138264693s • [SLOW TEST:10.331 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:38:51.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 1 07:38:55.970: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-e8c6402c-bb6d-11ea-a133-0242ac110018,GenerateName:,Namespace:e2e-tests-events-knxsx,SelfLink:/api/v1/namespaces/e2e-tests-events-knxsx/pods/send-events-e8c6402c-bb6d-11ea-a133-0242ac110018,UID:e8c9081d-bb6d-11ea-99e8-0242ac110002,ResourceVersion:18820572,Generation:0,CreationTimestamp:2020-07-01 07:38:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 935753271,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6jbcs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6jbcs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-6jbcs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017c14c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017c14e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 07:38:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 07:38:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 07:38:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 07:38:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.163,StartTime:2020-07-01 07:38:51 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-01 07:38:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://92cac4e0812bda16ad11137a9750cf62f545326e896eceead0368972f0a97efa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jul 1 07:38:57.975: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 1 07:38:59.980: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:38:59.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-knxsx" for this suite. 
Jul 1 07:39:42.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:39:42.052: INFO: namespace: e2e-tests-events-knxsx, resource: bindings, ignored listing per whitelist Jul 1 07:39:42.126: INFO: namespace e2e-tests-events-knxsx deletion completed in 42.129654606s • [SLOW TEST:50.271 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:39:42.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jul 1 07:39:42.241: INFO: Waiting up to 5m0s for pod "client-containers-06bec2d7-bb6e-11ea-a133-0242ac110018" in namespace "e2e-tests-containers-9bw59" to be "success or failure" Jul 1 07:39:42.254: INFO: Pod "client-containers-06bec2d7-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.459185ms Jul 1 07:39:44.258: INFO: Pod "client-containers-06bec2d7-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016156214s Jul 1 07:39:46.263: INFO: Pod "client-containers-06bec2d7-bb6e-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021216299s STEP: Saw pod success Jul 1 07:39:46.263: INFO: Pod "client-containers-06bec2d7-bb6e-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:39:46.266: INFO: Trying to get logs from node hunter-worker pod client-containers-06bec2d7-bb6e-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 07:39:46.322: INFO: Waiting for pod client-containers-06bec2d7-bb6e-11ea-a133-0242ac110018 to disappear Jul 1 07:39:46.337: INFO: Pod client-containers-06bec2d7-bb6e-11ea-a133-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:39:46.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-9bw59" for this suite. 
Jul 1 07:39:52.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:39:52.405: INFO: namespace: e2e-tests-containers-9bw59, resource: bindings, ignored listing per whitelist Jul 1 07:39:52.434: INFO: namespace e2e-tests-containers-9bw59 deletion completed in 6.09334107s • [SLOW TEST:10.308 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:39:52.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 1 07:39:52.537: INFO: Waiting up to 5m0s for pod "downward-api-0ce310b5-bb6e-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-q6z46" to be "success or failure" Jul 1 07:39:52.541: INFO: Pod "downward-api-0ce310b5-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.941302ms Jul 1 07:39:54.545: INFO: Pod "downward-api-0ce310b5-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008461629s Jul 1 07:39:56.550: INFO: Pod "downward-api-0ce310b5-bb6e-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01362606s STEP: Saw pod success Jul 1 07:39:56.551: INFO: Pod "downward-api-0ce310b5-bb6e-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:39:56.554: INFO: Trying to get logs from node hunter-worker2 pod downward-api-0ce310b5-bb6e-11ea-a133-0242ac110018 container dapi-container: STEP: delete the pod Jul 1 07:39:56.605: INFO: Waiting for pod downward-api-0ce310b5-bb6e-11ea-a133-0242ac110018 to disappear Jul 1 07:39:56.619: INFO: Pod downward-api-0ce310b5-bb6e-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:39:56.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q6z46" for this suite. Jul 1 07:40:02.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:40:02.708: INFO: namespace: e2e-tests-downward-api-q6z46, resource: bindings, ignored listing per whitelist Jul 1 07:40:02.799: INFO: namespace e2e-tests-downward-api-q6z46 deletion completed in 6.175528236s • [SLOW TEST:10.365 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:40:02.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 07:40:02.974: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:40:07.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-95rz8" for this suite. 
Jul 1 07:40:57.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:40:57.130: INFO: namespace: e2e-tests-pods-95rz8, resource: bindings, ignored listing per whitelist Jul 1 07:40:57.168: INFO: namespace e2e-tests-pods-95rz8 deletion completed in 50.130179099s • [SLOW TEST:54.369 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:40:57.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jul 1 07:40:57.269: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 07:40:57.277: INFO: Waiting for terminating namespaces to be deleted... 
Jul 1 07:40:57.279: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jul 1 07:40:57.285: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jul 1 07:40:57.285: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 07:40:57.285: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jul 1 07:40:57.285: INFO: Container kindnet-cni ready: true, restart count 0 Jul 1 07:40:57.285: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jul 1 07:40:57.285: INFO: Container coredns ready: true, restart count 0 Jul 1 07:40:57.285: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jul 1 07:40:57.290: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jul 1 07:40:57.290: INFO: Container coredns ready: true, restart count 0 Jul 1 07:40:57.290: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jul 1 07:40:57.290: INFO: Container kindnet-cni ready: true, restart count 0 Jul 1 07:40:57.290: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jul 1 07:40:57.290: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Jul 1 07:40:57.384: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Jul 1 07:40:57.384: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Jul 1 07:40:57.384: INFO: Pod kindnet-54h7m 
requesting resource cpu=100m on Node hunter-worker Jul 1 07:40:57.384: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Jul 1 07:40:57.384: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Jul 1 07:40:57.384: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-338c4254-bb6e-11ea-a133-0242ac110018.161d90fc852c098b], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zwzpc/filler-pod-338c4254-bb6e-11ea-a133-0242ac110018 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-338c4254-bb6e-11ea-a133-0242ac110018.161d90fd0a5f196f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-338c4254-bb6e-11ea-a133-0242ac110018.161d90fd45c87509], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-338c4254-bb6e-11ea-a133-0242ac110018.161d90fd5544cfb6], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-338d1c15-bb6e-11ea-a133-0242ac110018.161d90fc852bd997], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zwzpc/filler-pod-338d1c15-bb6e-11ea-a133-0242ac110018 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-338d1c15-bb6e-11ea-a133-0242ac110018.161d90fcd26a1c49], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-338d1c15-bb6e-11ea-a133-0242ac110018.161d90fd12570ee2], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-338d1c15-bb6e-11ea-a133-0242ac110018.161d90fd2b0e106a], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.161d90fd749ba210], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:41:02.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-zwzpc" for this suite. Jul 1 07:41:10.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:41:10.685: INFO: namespace: e2e-tests-sched-pred-zwzpc, resource: bindings, ignored listing per whitelist Jul 1 07:41:10.722: INFO: namespace e2e-tests-sched-pred-zwzpc deletion completed in 8.10454988s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.554 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:41:10.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:41:40.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-tests-container-runtime-7gj9g" for this suite. Jul 1 07:41:46.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:41:46.540: INFO: namespace: e2e-tests-container-runtime-7gj9g, resource: bindings, ignored listing per whitelist Jul 1 07:41:46.606: INFO: namespace e2e-tests-container-runtime-7gj9g deletion completed in 6.110159403s • [SLOW TEST:35.883 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:41:46.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-50f52b50-bb6e-11ea-a133-0242ac110018 Jul 1 07:41:46.754: INFO: Pod name 
my-hostname-basic-50f52b50-bb6e-11ea-a133-0242ac110018: Found 0 pods out of 1 Jul 1 07:41:51.759: INFO: Pod name my-hostname-basic-50f52b50-bb6e-11ea-a133-0242ac110018: Found 1 pods out of 1 Jul 1 07:41:51.759: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-50f52b50-bb6e-11ea-a133-0242ac110018" are running Jul 1 07:41:51.762: INFO: Pod "my-hostname-basic-50f52b50-bb6e-11ea-a133-0242ac110018-lmd7j" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 07:41:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 07:41:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 07:41:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 07:41:46 +0000 UTC Reason: Message:}]) Jul 1 07:41:51.762: INFO: Trying to dial the pod Jul 1 07:41:56.772: INFO: Controller my-hostname-basic-50f52b50-bb6e-11ea-a133-0242ac110018: Got expected result from replica 1 [my-hostname-basic-50f52b50-bb6e-11ea-a133-0242ac110018-lmd7j]: "my-hostname-basic-50f52b50-bb6e-11ea-a133-0242ac110018-lmd7j", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:41:56.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-rcfkb" for this suite. 
Jul 1 07:42:02.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:42:02.904: INFO: namespace: e2e-tests-replication-controller-rcfkb, resource: bindings, ignored listing per whitelist Jul 1 07:42:02.906: INFO: namespace e2e-tests-replication-controller-rcfkb deletion completed in 6.114859368s • [SLOW TEST:16.300 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:42:02.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:42:07.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-8wvbd" for this suite. 
Jul 1 07:42:53.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:42:53.135: INFO: namespace: e2e-tests-kubelet-test-8wvbd, resource: bindings, ignored listing per whitelist Jul 1 07:42:53.164: INFO: namespace e2e-tests-kubelet-test-8wvbd deletion completed in 46.127465478s • [SLOW TEST:50.257 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:42:53.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 1 07:42:53.303: INFO: Waiting up to 5m0s for pod "pod-78a309fa-bb6e-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-tlwjl" to be "success or failure" Jul 1 07:42:53.318: INFO: Pod "pod-78a309fa-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.269673ms Jul 1 07:42:55.322: INFO: Pod "pod-78a309fa-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019693409s Jul 1 07:42:57.327: INFO: Pod "pod-78a309fa-bb6e-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024082794s STEP: Saw pod success Jul 1 07:42:57.327: INFO: Pod "pod-78a309fa-bb6e-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:42:57.329: INFO: Trying to get logs from node hunter-worker pod pod-78a309fa-bb6e-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 07:42:57.393: INFO: Waiting for pod pod-78a309fa-bb6e-11ea-a133-0242ac110018 to disappear Jul 1 07:42:57.402: INFO: Pod pod-78a309fa-bb6e-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:42:57.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tlwjl" for this suite. 
Jul 1 07:43:03.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:43:03.484: INFO: namespace: e2e-tests-emptydir-tlwjl, resource: bindings, ignored listing per whitelist
Jul 1 07:43:03.501: INFO: namespace e2e-tests-emptydir-tlwjl deletion completed in 6.095384186s
• [SLOW TEST:10.336 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:43:03.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jul 1 07:43:03.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul 1 07:43:03.776: INFO: stderr: ""
Jul 1 07:43:03.776: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:43:03.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2j92c" for this suite.
Jul 1 07:43:09.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:43:09.899: INFO: namespace: e2e-tests-kubectl-2j92c, resource: bindings, ignored listing per whitelist
Jul 1 07:43:09.903: INFO: namespace e2e-tests-kubectl-2j92c deletion completed in 6.121834658s
• [SLOW TEST:6.402 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:43:09.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-82a03603-bb6e-11ea-a133-0242ac110018
STEP: Creating secret with name s-test-opt-upd-82a0366e-bb6e-11ea-a133-0242ac110018
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-82a03603-bb6e-11ea-a133-0242ac110018
STEP: Updating secret s-test-opt-upd-82a0366e-bb6e-11ea-a133-0242ac110018
STEP: Creating secret with name s-test-opt-create-82a0368d-bb6e-11ea-a133-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:43:18.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mg482" for this suite.
Jul 1 07:43:42.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:43:42.360: INFO: namespace: e2e-tests-secrets-mg482, resource: bindings, ignored listing per whitelist
Jul 1 07:43:42.379: INFO: namespace e2e-tests-secrets-mg482 deletion completed in 24.152021766s
• [SLOW TEST:32.476 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:43:42.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-96093ade-bb6e-11ea-a133-0242ac110018
STEP: Creating secret with name secret-projected-all-test-volume-96093ac7-bb6e-11ea-a133-0242ac110018
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 1 07:43:42.663: INFO: Waiting up to 5m0s for pod "projected-volume-96093a86-bb6e-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-hvjpw" to be "success or failure"
Jul 1 07:43:42.680: INFO: Pod "projected-volume-96093a86-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.450085ms
Jul 1 07:43:44.686: INFO: Pod "projected-volume-96093a86-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022273599s
Jul 1 07:43:46.835: INFO: Pod "projected-volume-96093a86-bb6e-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.172202436s
STEP: Saw pod success
Jul 1 07:43:46.836: INFO: Pod "projected-volume-96093a86-bb6e-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 07:43:46.838: INFO: Trying to get logs from node hunter-worker pod projected-volume-96093a86-bb6e-11ea-a133-0242ac110018 container projected-all-volume-test: 
STEP: delete the pod
Jul 1 07:43:46.897: INFO: Waiting for pod projected-volume-96093a86-bb6e-11ea-a133-0242ac110018 to disappear
Jul 1 07:43:46.906: INFO: Pod projected-volume-96093a86-bb6e-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:43:46.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hvjpw" for this suite.
Jul 1 07:43:52.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:43:52.938: INFO: namespace: e2e-tests-projected-hvjpw, resource: bindings, ignored listing per whitelist
Jul 1 07:43:53.000: INFO: namespace e2e-tests-projected-hvjpw deletion completed in 6.090690014s
• [SLOW TEST:10.621 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:43:53.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jul 1 07:43:57.134: INFO: Pod pod-hostip-9c4a23f4-bb6e-11ea-a133-0242ac110018 has hostIP: 172.17.0.4
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:43:57.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rn6j2" for this suite.
Jul 1 07:44:19.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:44:19.230: INFO: namespace: e2e-tests-pods-rn6j2, resource: bindings, ignored listing per whitelist
Jul 1 07:44:19.242: INFO: namespace e2e-tests-pods-rn6j2 deletion completed in 22.104384233s
• [SLOW TEST:26.242 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:44:19.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-wwwgt/configmap-test-abed23ef-bb6e-11ea-a133-0242ac110018
STEP: Creating a pod to test consume configMaps
Jul 1 07:44:19.372: INFO: Waiting up to 5m0s for pod "pod-configmaps-abedce06-bb6e-11ea-a133-0242ac110018" in namespace "e2e-tests-configmap-wwwgt" to be "success or failure"
Jul 1 07:44:19.416: INFO: Pod "pod-configmaps-abedce06-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 43.966528ms
Jul 1 07:44:21.421: INFO: Pod "pod-configmaps-abedce06-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048969558s
Jul 1 07:44:23.426: INFO: Pod "pod-configmaps-abedce06-bb6e-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053733969s
STEP: Saw pod success
Jul 1 07:44:23.426: INFO: Pod "pod-configmaps-abedce06-bb6e-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 07:44:23.429: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-abedce06-bb6e-11ea-a133-0242ac110018 container env-test: 
STEP: delete the pod
Jul 1 07:44:23.454: INFO: Waiting for pod pod-configmaps-abedce06-bb6e-11ea-a133-0242ac110018 to disappear
Jul 1 07:44:23.458: INFO: Pod pod-configmaps-abedce06-bb6e-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:44:23.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wwwgt" for this suite.
Jul 1 07:44:29.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:44:29.500: INFO: namespace: e2e-tests-configmap-wwwgt, resource: bindings, ignored listing per whitelist
Jul 1 07:44:29.548: INFO: namespace e2e-tests-configmap-wwwgt deletion completed in 6.087186109s
• [SLOW TEST:10.305 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:44:29.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jul 1 07:44:29.701: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix314844625/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:44:29.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sn2lg" for this suite.
Jul 1 07:44:35.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:44:35.864: INFO: namespace: e2e-tests-kubectl-sn2lg, resource: bindings, ignored listing per whitelist
Jul 1 07:44:35.916: INFO: namespace e2e-tests-kubectl-sn2lg deletion completed in 6.140560069s
• [SLOW TEST:6.368 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:44:35.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 1 07:44:36.028: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 1 07:44:36.034: INFO: Number of nodes with available pods: 0
Jul 1 07:44:36.034: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul 1 07:44:36.122: INFO: Number of nodes with available pods: 0
Jul 1 07:44:36.122: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:37.126: INFO: Number of nodes with available pods: 0
Jul 1 07:44:37.126: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:38.188: INFO: Number of nodes with available pods: 0
Jul 1 07:44:38.188: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:39.135: INFO: Number of nodes with available pods: 0
Jul 1 07:44:39.135: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:40.126: INFO: Number of nodes with available pods: 1
Jul 1 07:44:40.126: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 1 07:44:40.160: INFO: Number of nodes with available pods: 1
Jul 1 07:44:40.160: INFO: Number of running nodes: 0, number of available pods: 1
Jul 1 07:44:41.165: INFO: Number of nodes with available pods: 0
Jul 1 07:44:41.165: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 1 07:44:41.178: INFO: Number of nodes with available pods: 0
Jul 1 07:44:41.178: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:42.182: INFO: Number of nodes with available pods: 0
Jul 1 07:44:42.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:43.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:43.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:44.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:44.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:45.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:45.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:46.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:46.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:47.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:47.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:48.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:48.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:49.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:49.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:50.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:50.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:51.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:51.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:52.182: INFO: Number of nodes with available pods: 0
Jul 1 07:44:52.182: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:53.262: INFO: Number of nodes with available pods: 0
Jul 1 07:44:53.262: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:54.183: INFO: Number of nodes with available pods: 0
Jul 1 07:44:54.183: INFO: Node hunter-worker is running more than one daemon pod
Jul 1 07:44:55.183: INFO: Number of nodes with available pods: 1
Jul 1 07:44:55.183: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2bwwg, will wait for the garbage collector to delete the pods
Jul 1 07:44:55.248: INFO: Deleting DaemonSet.extensions daemon-set took: 6.356465ms
Jul 1 07:44:55.348: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.217733ms
Jul 1 07:44:59.560: INFO: Number of nodes with available pods: 0
Jul 1 07:44:59.560: INFO: Number of running nodes: 0, number of available pods: 0
Jul 1 07:44:59.563: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2bwwg/daemonsets","resourceVersion":"18821750"},"items":null}
Jul 1 07:44:59.565: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2bwwg/pods","resourceVersion":"18821750"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:44:59.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2bwwg" for this suite.
Jul 1 07:45:05.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:45:05.677: INFO: namespace: e2e-tests-daemonsets-2bwwg, resource: bindings, ignored listing per whitelist
Jul 1 07:45:05.705: INFO: namespace e2e-tests-daemonsets-2bwwg deletion completed in 6.104154672s
• [SLOW TEST:29.789 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:45:05.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-c79ef70e-bb6e-11ea-a133-0242ac110018
STEP: Creating configMap with name cm-test-opt-upd-c79ef773-bb6e-11ea-a133-0242ac110018
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c79ef70e-bb6e-11ea-a133-0242ac110018
STEP: Updating configmap cm-test-opt-upd-c79ef773-bb6e-11ea-a133-0242ac110018
STEP: Creating configMap with name cm-test-opt-create-c79ef79f-bb6e-11ea-a133-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:45:16.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7q6sd" for this suite.
Jul 1 07:45:38.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:45:38.074: INFO: namespace: e2e-tests-projected-7q6sd, resource: bindings, ignored listing per whitelist
Jul 1 07:45:38.209: INFO: namespace e2e-tests-projected-7q6sd deletion completed in 22.175148776s
• [SLOW TEST:32.503 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:45:38.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 1 07:45:38.339: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db01dac1-bb6e-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-jc8mb" to be "success or failure"
Jul 1 07:45:38.348: INFO: Pod "downwardapi-volume-db01dac1-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389766ms
Jul 1 07:45:40.351: INFO: Pod "downwardapi-volume-db01dac1-bb6e-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011568092s
Jul 1 07:45:42.355: INFO: Pod "downwardapi-volume-db01dac1-bb6e-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015822228s
STEP: Saw pod success
Jul 1 07:45:42.355: INFO: Pod "downwardapi-volume-db01dac1-bb6e-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 07:45:42.359: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-db01dac1-bb6e-11ea-a133-0242ac110018 container client-container: 
STEP: delete the pod
Jul 1 07:45:42.378: INFO: Waiting for pod downwardapi-volume-db01dac1-bb6e-11ea-a133-0242ac110018 to disappear
Jul 1 07:45:42.429: INFO: Pod downwardapi-volume-db01dac1-bb6e-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:45:42.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jc8mb" for this suite.
Jul 1 07:45:48.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:45:48.486: INFO: namespace: e2e-tests-downward-api-jc8mb, resource: bindings, ignored listing per whitelist
Jul 1 07:45:48.522: INFO: namespace e2e-tests-downward-api-jc8mb deletion completed in 6.088792102s
• [SLOW TEST:10.313 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:45:48.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-5w8xz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5w8xz to expose endpoints map[]
Jul 1 07:45:48.712: INFO: Get endpoints failed (13.297797ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul 1 07:45:49.722: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5w8xz exposes endpoints map[] (1.022982874s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-5w8xz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5w8xz to expose endpoints map[pod1:[80]]
Jul 1 07:45:52.875: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5w8xz exposes endpoints map[pod1:[80]] (3.147931779s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-5w8xz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5w8xz to expose endpoints map[pod1:[80] pod2:[80]]
Jul 1 07:45:55.950: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5w8xz exposes endpoints map[pod1:[80] pod2:[80]] (3.070575777s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-5w8xz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5w8xz to expose endpoints map[pod2:[80]]
Jul 1 07:45:56.977: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5w8xz exposes endpoints map[pod2:[80]] (1.021936933s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-5w8xz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5w8xz to expose endpoints map[]
Jul 1 07:45:57.992: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5w8xz exposes endpoints map[] (1.01163191s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:45:58.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-5w8xz" for this suite.
Jul 1 07:46:04.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:46:04.283: INFO: namespace: e2e-tests-services-5w8xz, resource: bindings, ignored listing per whitelist
Jul 1 07:46:04.350: INFO: namespace e2e-tests-services-5w8xz deletion completed in 6.134492406s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:15.828 seconds]
[sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:46:04.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 1 07:46:09.037: INFO: Successfully updated pod "labelsupdateea982cb7-bb6e-11ea-a133-0242ac110018"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:46:11.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pgsqn" for this suite.
Jul 1 07:46:33.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:46:33.153: INFO: namespace: e2e-tests-downward-api-pgsqn, resource: bindings, ignored listing per whitelist
Jul 1 07:46:33.158: INFO: namespace e2e-tests-downward-api-pgsqn deletion completed in 22.097754305s
• [SLOW TEST:28.807 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:46:33.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 1 07:46:33.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hzp6f'
Jul 1 07:46:35.613: INFO: stderr: ""
Jul 1 07:46:35.613: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jul 1 07:46:35.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hzp6f'
Jul 1 07:46:41.268: INFO: stderr: ""
Jul 1 07:46:41.268: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:46:41.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hzp6f" for this suite.
Jul 1 07:46:47.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:46:47.321: INFO: namespace: e2e-tests-kubectl-hzp6f, resource: bindings, ignored listing per whitelist
Jul 1 07:46:47.362: INFO: namespace e2e-tests-kubectl-hzp6f deletion completed in 6.090565153s
• [SLOW TEST:14.203 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:46:47.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-04371dc7-bb6f-11ea-a133-0242ac110018
STEP: Creating a pod to test consume secrets
Jul 1 07:46:47.496: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0439e817-bb6f-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-rsfts" to be "success or failure"
Jul 1 07:46:47.553: INFO: Pod "pod-projected-secrets-0439e817-bb6f-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 56.604265ms
Jul 1 07:46:49.557: INFO: Pod "pod-projected-secrets-0439e817-bb6f-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060720225s
Jul 1 07:46:51.562: INFO: Pod "pod-projected-secrets-0439e817-bb6f-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065521094s
STEP: Saw pod success
Jul 1 07:46:51.562: INFO: Pod "pod-projected-secrets-0439e817-bb6f-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 07:46:51.565: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-0439e817-bb6f-11ea-a133-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
Jul 1 07:46:51.608: INFO: Waiting for pod pod-projected-secrets-0439e817-bb6f-11ea-a133-0242ac110018 to disappear
Jul 1 07:46:51.639: INFO: Pod pod-projected-secrets-0439e817-bb6f-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:46:51.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rsfts" for this suite.
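The repeated `Phase="Pending"` then `Phase="Succeeded"` records come from the framework polling the pod's phase (roughly every 2 s here) until it reaches a terminal state within the 5m0s budget. A cluster-free sketch of that "success or failure" wait, where the phase source is a stand-in rather than a real API client:

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=0.01):
    """Poll get_phase() until the pod reaches a terminal phase, mirroring
    the e2e framework's 5m0s "success or failure" wait shown in the log."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence matching the three log records above.
phases = iter(["Pending", "Pending", "Succeeded"])
final = wait_for_success_or_failure(lambda: next(phases))
```

After the wait succeeds, the test fetches the container's logs and asserts on their contents, which is why "Saw pod success" is followed by a log retrieval step.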
Jul 1 07:46:57.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:46:57.709: INFO: namespace: e2e-tests-projected-rsfts, resource: bindings, ignored listing per whitelist
Jul 1 07:46:57.729: INFO: namespace e2e-tests-projected-rsfts deletion completed in 6.084956726s
• [SLOW TEST:10.367 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:46:57.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-0a66a080-bb6f-11ea-a133-0242ac110018
STEP: Creating a pod to test consume secrets
Jul 1 07:46:57.859: INFO: Waiting up to 5m0s for pod "pod-secrets-0a673a8c-bb6f-11ea-a133-0242ac110018" in namespace "e2e-tests-secrets-kcv2v" to be "success or failure"
Jul 1 07:46:57.877: INFO: Pod "pod-secrets-0a673a8c-bb6f-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.73693ms
Jul 1 07:46:59.881: INFO: Pod "pod-secrets-0a673a8c-bb6f-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022000392s
Jul 1 07:47:01.886: INFO: Pod "pod-secrets-0a673a8c-bb6f-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0266812s
STEP: Saw pod success
Jul 1 07:47:01.886: INFO: Pod "pod-secrets-0a673a8c-bb6f-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 07:47:01.889: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-0a673a8c-bb6f-11ea-a133-0242ac110018 container secret-volume-test:
STEP: delete the pod
Jul 1 07:47:01.947: INFO: Waiting for pod pod-secrets-0a673a8c-bb6f-11ea-a133-0242ac110018 to disappear
Jul 1 07:47:01.956: INFO: Pod pod-secrets-0a673a8c-bb6f-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:47:01.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kcv2v" for this suite.
Jul 1 07:47:07.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:47:07.986: INFO: namespace: e2e-tests-secrets-kcv2v, resource: bindings, ignored listing per whitelist
Jul 1 07:47:08.062: INFO: namespace e2e-tests-secrets-kcv2v deletion completed in 6.103865131s
• [SLOW TEST:10.333 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:47:08.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-m8msd
Jul 1 07:47:12.170: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-m8msd
STEP: checking the pod's current state and verifying that restartCount is present
Jul 1 07:47:12.174: INFO: Initial restart count of pod liveness-http is 0
Jul 1 07:47:24.441: INFO: Restart count of pod e2e-tests-container-probe-m8msd/liveness-http is now 1 (12.267702178s elapsed)
Jul 1 07:47:46.676: INFO: Restart count of pod e2e-tests-container-probe-m8msd/liveness-http is now 2 (34.501849064s elapsed)
Jul 1 07:48:04.717: INFO: Restart count of pod e2e-tests-container-probe-m8msd/liveness-http is now 3 (52.543530386s elapsed)
Jul 1 07:48:24.765: INFO: Restart count of pod e2e-tests-container-probe-m8msd/liveness-http is now 4 (1m12.591759869s elapsed)
Jul 1 07:49:45.736: INFO: Restart count of pod e2e-tests-container-probe-m8msd/liveness-http is now 5 (2m33.561860207s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:49:45.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-m8msd" for this suite.
Jul 1 07:49:51.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:49:51.822: INFO: namespace: e2e-tests-container-probe-m8msd, resource: bindings, ignored listing per whitelist
Jul 1 07:49:51.876: INFO: namespace e2e-tests-container-probe-m8msd deletion completed in 6.085986081s
• [SLOW TEST:163.813 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:49:51.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jul 1 07:49:51.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v8kkc'
Jul 1 07:49:52.325: INFO: stderr: ""
Jul 1 07:49:52.325: INFO: stdout: "pod/pause created\n"
Jul 1 07:49:52.325: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul 1 07:49:52.325: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-v8kkc" to be "running and ready"
Jul 1 07:49:52.415: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 89.861766ms
Jul 1 07:49:54.419: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093623461s
Jul 1 07:49:56.423: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097384571s
Jul 1 07:49:58.426: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.101228876s
Jul 1 07:49:58.426: INFO: Pod "pause" satisfied condition "running and ready"
Jul 1 07:49:58.426: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jul 1 07:49:58.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-v8kkc'
Jul 1 07:49:58.542: INFO: stderr: ""
Jul 1 07:49:58.542: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul 1 07:49:58.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-v8kkc'
Jul 1 07:49:58.644: INFO: stderr: ""
Jul 1 07:49:58.644: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul 1 07:49:58.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-v8kkc'
Jul 1 07:49:58.750: INFO: stderr: ""
Jul 1 07:49:58.750: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul 1 07:49:58.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-v8kkc'
Jul 1 07:49:58.851: INFO: stderr: ""
Jul 1 07:49:58.851: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jul 1 07:49:58.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v8kkc'
Jul 1 07:49:58.989: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 1 07:49:58.989: INFO: stdout: "pod \"pause\" force deleted\n"
Jul 1 07:49:58.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-v8kkc'
Jul 1 07:49:59.150: INFO: stderr: "No resources found.\n"
Jul 1 07:49:59.150: INFO: stdout: ""
Jul 1 07:49:59.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-v8kkc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 1 07:49:59.255: INFO: stderr: ""
Jul 1 07:49:59.255: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:49:59.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v8kkc" for this suite.
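The label verification steps run `kubectl get pod pause -L testing-label` and check the TESTING-LABEL column in the captured stdout. A rough, offline sketch of that column check against the exact stdout shown in the log (this simplified parser splits on whitespace, so it assumes the label value contains no spaces):

```python
def label_column(kubectl_stdout: str, column: str = "TESTING-LABEL") -> str:
    """Return the value under `column` for the single pod row, or "" if the
    label is unset (the column is then empty, as in the second get above)."""
    header, row = [l for l in kubectl_stdout.splitlines() if l.strip()][:2]
    idx = header.split().index(column)
    fields = row.split()
    return fields[idx] if len(fields) > idx else ""

# stdout captured in the log before and after `kubectl label pods pause testing-label-`
labeled = "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n"
unlabeled = "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s \n"
```

The trailing `testing-label-` form (key with a trailing dash) is kubectl's syntax for removing a label, which is why the second `get` shows an empty column.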
Jul 1 07:50:07.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:50:07.320: INFO: namespace: e2e-tests-kubectl-v8kkc, resource: bindings, ignored listing per whitelist
Jul 1 07:50:07.381: INFO: namespace e2e-tests-kubectl-v8kkc deletion completed in 8.122946685s
• [SLOW TEST:15.505 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:50:07.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 1 07:50:07.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-ktgk2'
Jul 1 07:50:07.678: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 1 07:50:07.678: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jul 1 07:50:09.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-ktgk2'
Jul 1 07:50:09.813: INFO: stderr: ""
Jul 1 07:50:09.813: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:50:09.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ktgk2" for this suite.
Jul 1 07:50:16.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:50:16.848: INFO: namespace: e2e-tests-kubectl-ktgk2, resource: bindings, ignored listing per whitelist
Jul 1 07:50:16.900: INFO: namespace e2e-tests-kubectl-ktgk2 deletion completed in 6.356990212s
• [SLOW TEST:9.519 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:50:16.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 1 07:50:17.009: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-jjpx7" to be "success or failure"
Jul 1 07:50:17.038: INFO: Pod "downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.1713ms
Jul 1 07:50:19.043: INFO: Pod "downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033590514s
Jul 1 07:50:21.518: INFO: Pod "downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508875556s
Jul 1 07:50:23.772: INFO: Pod "downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.763144196s
Jul 1 07:50:25.775: INFO: Pod "downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.765906196s
STEP: Saw pod success
Jul 1 07:50:25.775: INFO: Pod "downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 07:50:25.778: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018 container client-container:
STEP: delete the pod
Jul 1 07:50:25.844: INFO: Waiting for pod downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018 to disappear
Jul 1 07:50:25.853: INFO: Pod downwardapi-volume-81190a77-bb6f-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:50:25.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jjpx7" for this suite.
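The memory-limit test above exposes the container's memory limit through a projected downward API volume via a `resourceFieldRef` with a divisor, then reads the file back from the container's logs. A sketch of the divisor arithmetic involved, assuming the API's documented behavior of dividing the quantity by the divisor and rounding up (the 64Mi figure below is illustrative, not taken from this run):

```python
def downward_api_value(quantity_bytes: int, divisor_bytes: int = 1) -> int:
    """Render a resource quantity the way a resourceFieldRef with a divisor
    does for a downward API volume file: quantity / divisor, rounded up to
    an integer (assumption based on the API's ceiling semantics)."""
    return -(-quantity_bytes // divisor_bytes)  # ceiling division

MI = 1024 * 1024
# e.g. a hypothetical 64Mi memory limit exposed with divisor "1Mi"
value = downward_api_value(64 * MI, MI)
```

The test then asserts that the file's contents match the limit set on the container spec, which is what "Saw pod success" confirms here.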
Jul 1 07:50:31.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:50:31.871: INFO: namespace: e2e-tests-projected-jjpx7, resource: bindings, ignored listing per whitelist
Jul 1 07:50:31.948: INFO: namespace e2e-tests-projected-jjpx7 deletion completed in 6.092346467s
• [SLOW TEST:15.047 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:50:31.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul 1 07:50:32.180: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-8kkpn,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kkpn/configmaps/e2e-watch-test-resource-version,UID:8a192b5d-bb6f-11ea-99e8-0242ac110002,ResourceVersion:18822760,Generation:0,CreationTimestamp:2020-07-01 07:50:32 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 1 07:50:32.180: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-8kkpn,SelfLink:/api/v1/namespaces/e2e-tests-watch-8kkpn/configmaps/e2e-watch-test-resource-version,UID:8a192b5d-bb6f-11ea-99e8-0242ac110002,ResourceVersion:18822761,Generation:0,CreationTimestamp:2020-07-01 07:50:32 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:50:32.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-8kkpn" for this suite.
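The semantics exercised above: a watch started from the resourceVersion returned by the first update replays only the events that happened after that version, which is why the test observes exactly one MODIFIED (ResourceVersion 18822760) and one DELETED (18822761). A tiny sketch of that filtering rule; the event tuples and the two earlier resource versions below are illustrative stand-ins, only 18822760 and 18822761 appear in this run's log:

```python
def replay_from(events, resource_version):
    """Return events strictly newer than `resource_version`, mirroring how a
    watch started at a given resourceVersion skips history up to that point."""
    return [(etype, rv) for etype, rv in events if rv > resource_version]

history = [
    ("ADDED", 18822758),      # creation (version is a hypothetical example)
    ("MODIFIED", 18822759),   # first update - the watch starts from here
    ("MODIFIED", 18822760),   # second update, observed by the test
    ("DELETED", 18822761),    # deletion, observed by the test
]
seen = replay_from(history, 18822759)
```

Real watches compare versions provided by etcd rather than doing client-side filtering, so treat the integer comparison here purely as an illustration of the ordering contract.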
Jul 1 07:50:38.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:50:38.270: INFO: namespace: e2e-tests-watch-8kkpn, resource: bindings, ignored listing per whitelist
Jul 1 07:50:38.295: INFO: namespace e2e-tests-watch-8kkpn deletion completed in 6.070724253s
• [SLOW TEST:6.347 seconds]
[sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:50:38.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-xtbpv
Jul 1 07:51:04.417: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-xtbpv
STEP: checking the pod's current state and verifying that restartCount is present
Jul 1 07:51:04.419: INFO: Initial restart count of pod liveness-exec is 0
Jul 1 07:52:23.511: INFO: Restart count of pod e2e-tests-container-probe-xtbpv/liveness-exec is now 1 (1m19.092160507s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:52:23.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xtbpv" for this suite.
Jul 1 07:52:29.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:52:29.642: INFO: namespace: e2e-tests-container-probe-xtbpv, resource: bindings, ignored listing per whitelist
Jul 1 07:52:29.691: INFO: namespace e2e-tests-container-probe-xtbpv deletion completed in 6.136160486s
• [SLOW TEST:111.395 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:52:29.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 1 07:52:41.916: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:52:41.944: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:52:43.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:52:43.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:52:45.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:52:45.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:52:47.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:52:47.947: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:52:49.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:52:49.947: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:52:51.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:52:51.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:52:53.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:52:53.947: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:52:55.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:52:55.947: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:52:57.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:52:57.947: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:52:59.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:53:00.817: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:53:01.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:53:01.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:53:03.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:53:03.981: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:53:05.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:53:06.029: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:53:07.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:53:07.947: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:53:09.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:53:09.999: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:53:11.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:53:11.947: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 07:53:13.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 07:53:13.957: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:53:13.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jcvtz" for this suite.
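For context, the pod-with-prestop-exec-hook pod polled above has roughly the following shape. Only the pod name and the fact that it carries a preStop exec hook appear in this log; the image, container name, and hook command below are illustrative assumptions, not taken from the test source.

```yaml
# Hypothetical sketch of a pod with a preStop exec lifecycle hook.
# Pod name is from the log; image and command are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook   # assumed container name
    image: busybox                     # assumed image
    command: ["sh", "-c", "sleep 600"] # assumed; keeps the container alive
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop"]  # assumed hook command
```

When such a pod is deleted, the kubelet runs the preStop command before sending the container its termination signal, which is why the test above polls for the pod to disappear only after the hook has had a chance to fire.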
Jul 1 07:53:36.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:53:36.038: INFO: namespace: e2e-tests-container-lifecycle-hook-jcvtz, resource: bindings, ignored listing per whitelist
Jul 1 07:53:36.056: INFO: namespace e2e-tests-container-lifecycle-hook-jcvtz deletion completed in 22.090665372s
• [SLOW TEST:66.365 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:53:36.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 1 07:53:36.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-lf78l'
Jul 1 07:53:36.386: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 1 07:53:36.386: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jul 1 07:53:38.416: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-vbn5b]
Jul 1 07:53:38.416: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-vbn5b" in namespace "e2e-tests-kubectl-lf78l" to be "running and ready"
Jul 1 07:53:38.418: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.498984ms
Jul 1 07:53:40.421: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005690357s
Jul 1 07:53:43.371: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.95556226s
Jul 1 07:53:45.374: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.958564463s
Jul 1 07:53:47.376: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.960646917s
Jul 1 07:53:49.380: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.964310558s
Jul 1 07:53:52.621: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.20573757s
Jul 1 07:53:54.625: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.209498938s
Jul 1 07:53:56.628: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.212714722s
Jul 1 07:53:58.631: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.215193824s
Jul 1 07:54:00.634: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.218324556s
Jul 1 07:54:02.676: INFO: Pod "e2e-test-nginx-rc-vbn5b": Phase="Running", Reason="", readiness=true. Elapsed: 24.26034119s
Jul 1 07:54:02.676: INFO: Pod "e2e-test-nginx-rc-vbn5b" satisfied condition "running and ready"
Jul 1 07:54:02.676: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-vbn5b]
Jul 1 07:54:02.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lf78l'
Jul 1 07:54:02.801: INFO: stderr: ""
Jul 1 07:54:02.801: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jul 1 07:54:02.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lf78l'
Jul 1 07:54:02.923: INFO: stderr: ""
Jul 1 07:54:02.924: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:54:02.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lf78l" for this suite.
Jul 1 07:54:26.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:54:26.984: INFO: namespace: e2e-tests-kubectl-lf78l, resource: bindings, ignored listing per whitelist
Jul 1 07:54:27.110: INFO: namespace e2e-tests-kubectl-lf78l deletion completed in 24.162012765s
• [SLOW TEST:51.054 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:54:27.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jul 1 07:54:27.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:28.660: INFO: stderr: ""
Jul 1 07:54:28.660: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 1 07:54:28.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:28.984: INFO: stderr: ""
Jul 1 07:54:28.984: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jul 1 07:54:33.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:34.093: INFO: stderr: ""
Jul 1 07:54:34.093: INFO: stdout: "update-demo-nautilus-f8jrh update-demo-nautilus-jtgr5 "
Jul 1 07:54:34.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8jrh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:34.174: INFO: stderr: ""
Jul 1 07:54:34.174: INFO: stdout: ""
Jul 1 07:54:34.174: INFO: update-demo-nautilus-f8jrh is created but not running
Jul 1 07:54:39.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:39.379: INFO: stderr: ""
Jul 1 07:54:39.379: INFO: stdout: "update-demo-nautilus-f8jrh update-demo-nautilus-jtgr5 "
Jul 1 07:54:39.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8jrh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:39.461: INFO: stderr: ""
Jul 1 07:54:39.461: INFO: stdout: ""
Jul 1 07:54:39.461: INFO: update-demo-nautilus-f8jrh is created but not running
Jul 1 07:54:44.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:46.141: INFO: stderr: ""
Jul 1 07:54:46.141: INFO: stdout: "update-demo-nautilus-f8jrh update-demo-nautilus-jtgr5 "
Jul 1 07:54:46.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8jrh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:46.788: INFO: stderr: ""
Jul 1 07:54:46.788: INFO: stdout: ""
Jul 1 07:54:46.788: INFO: update-demo-nautilus-f8jrh is created but not running
Jul 1 07:54:51.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:53.223: INFO: stderr: ""
Jul 1 07:54:53.223: INFO: stdout: "update-demo-nautilus-f8jrh update-demo-nautilus-jtgr5 "
Jul 1 07:54:53.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8jrh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:54.201: INFO: stderr: ""
Jul 1 07:54:54.201: INFO: stdout: ""
Jul 1 07:54:54.201: INFO: update-demo-nautilus-f8jrh is created but not running
Jul 1 07:54:59.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:59.304: INFO: stderr: ""
Jul 1 07:54:59.304: INFO: stdout: "update-demo-nautilus-f8jrh update-demo-nautilus-jtgr5 "
Jul 1 07:54:59.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8jrh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:59.406: INFO: stderr: ""
Jul 1 07:54:59.406: INFO: stdout: "true"
Jul 1 07:54:59.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8jrh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:59.497: INFO: stderr: ""
Jul 1 07:54:59.497: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 1 07:54:59.497: INFO: validating pod update-demo-nautilus-f8jrh
Jul 1 07:54:59.532: INFO: got data: { "image": "nautilus.jpg" }
Jul 1 07:54:59.532: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 1 07:54:59.532: INFO: update-demo-nautilus-f8jrh is verified up and running
Jul 1 07:54:59.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jtgr5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:59.644: INFO: stderr: ""
Jul 1 07:54:59.644: INFO: stdout: "true"
Jul 1 07:54:59.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jtgr5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:54:59.731: INFO: stderr: ""
Jul 1 07:54:59.731: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 1 07:54:59.731: INFO: validating pod update-demo-nautilus-jtgr5
Jul 1 07:54:59.742: INFO: got data: { "image": "nautilus.jpg" }
Jul 1 07:54:59.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 1 07:54:59.742: INFO: update-demo-nautilus-jtgr5 is verified up and running
STEP: rolling-update to new replication controller
Jul 1 07:54:59.744: INFO: scanned /root for discovery docs:
Jul 1 07:54:59.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:55:24.715: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 1 07:55:24.715: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 1 07:55:24.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:55:24.804: INFO: stderr: ""
Jul 1 07:55:24.804: INFO: stdout: "update-demo-kitten-dwksw update-demo-kitten-rjbhl "
Jul 1 07:55:24.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dwksw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:55:24.883: INFO: stderr: ""
Jul 1 07:55:24.883: INFO: stdout: "true"
Jul 1 07:55:24.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dwksw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:55:24.974: INFO: stderr: ""
Jul 1 07:55:24.974: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 1 07:55:24.974: INFO: validating pod update-demo-kitten-dwksw
Jul 1 07:55:24.984: INFO: got data: { "image": "kitten.jpg" }
Jul 1 07:55:24.984: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 1 07:55:24.984: INFO: update-demo-kitten-dwksw is verified up and running
Jul 1 07:55:24.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rjbhl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:55:25.065: INFO: stderr: ""
Jul 1 07:55:25.065: INFO: stdout: "true"
Jul 1 07:55:25.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rjbhl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cc5jd'
Jul 1 07:55:25.151: INFO: stderr: ""
Jul 1 07:55:25.151: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 1 07:55:25.151: INFO: validating pod update-demo-kitten-rjbhl
Jul 1 07:55:25.158: INFO: got data: { "image": "kitten.jpg" }
Jul 1 07:55:25.158: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 1 07:55:25.158: INFO: update-demo-kitten-rjbhl is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:55:25.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cc5jd" for this suite.
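The rolling update above replaces the nautilus replication controller with a kitten one. A replication controller consistent with what the log reports would look roughly like the following sketch; the names, the name=update-demo label, the replica count of 2, and the image string come from the log, while the exact field layout is an assumption about the test's manifest.

```yaml
# Hypothetical reconstruction of the replacement controller fed to
# `kubectl rolling-update ... -f -` in the log above.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-kitten
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/kitten:1.0
```

As the stderr in the log notes, `kubectl rolling-update` was already deprecated at this release in favor of `kubectl rollout` on Deployments; the test exercises the legacy client-side update path, which scales the new controller up and the old one down one pod at a time.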
Jul 1 07:56:03.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 07:56:03.266: INFO: namespace: e2e-tests-kubectl-cc5jd, resource: bindings, ignored listing per whitelist
Jul 1 07:56:03.273: INFO: namespace e2e-tests-kubectl-cc5jd deletion completed in 38.111645861s
• [SLOW TEST:96.163 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 07:56:03.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4f8f7fbc-bb70-11ea-a133-0242ac110018
STEP: Creating a pod to test consume secrets
Jul 1 07:56:03.402: INFO: Waiting up to 5m0s for pod "pod-secrets-4f91c8d6-bb70-11ea-a133-0242ac110018" in namespace "e2e-tests-secrets-n7v2q" to be "success or failure"
Jul 1 07:56:03.408: INFO: Pod "pod-secrets-4f91c8d6-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134002ms
Jul 1 07:56:05.451: INFO: Pod "pod-secrets-4f91c8d6-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048542401s
Jul 1 07:56:07.552: INFO: Pod "pod-secrets-4f91c8d6-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150167991s
Jul 1 07:56:09.555: INFO: Pod "pod-secrets-4f91c8d6-bb70-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153151048s
STEP: Saw pod success
Jul 1 07:56:09.555: INFO: Pod "pod-secrets-4f91c8d6-bb70-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 07:56:09.557: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4f91c8d6-bb70-11ea-a133-0242ac110018 container secret-volume-test:
STEP: delete the pod
Jul 1 07:56:09.618: INFO: Waiting for pod pod-secrets-4f91c8d6-bb70-11ea-a133-0242ac110018 to disappear
Jul 1 07:56:09.636: INFO: Pod pod-secrets-4f91c8d6-bb70-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 07:56:09.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-n7v2q" for this suite.
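The multiple-volumes test above mounts one secret at two paths inside a single pod. A sketch consistent with the secret name and container name in the log would be roughly as follows; the mount paths, image, and command are assumptions for illustration.

```yaml
# Hypothetical sketch: one secret consumed via two volumes in the same pod.
# Secret name and container name are from the log; everything else is assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox             # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"]  # assumed
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-4f8f7fbc-bb70-11ea-a133-0242ac110018
  - name: secret-volume-2
    secret:
      secretName: secret-test-4f8f7fbc-bb70-11ea-a133-0242ac110018
```

Both volumes reference the same `secretName`, which is exactly what the test exercises: the kubelet must project the same secret independently into each mount point.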
Jul 1 07:56:15.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:56:15.686: INFO: namespace: e2e-tests-secrets-n7v2q, resource: bindings, ignored listing per whitelist Jul 1 07:56:15.821: INFO: namespace e2e-tests-secrets-n7v2q deletion completed in 6.181447896s • [SLOW TEST:12.548 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:56:15.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-djkmh/secret-test-57069ef8-bb70-11ea-a133-0242ac110018 STEP: Creating a pod to test consume secrets Jul 1 07:56:15.972: INFO: Waiting up to 5m0s for pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018" in namespace "e2e-tests-secrets-djkmh" to be "success or failure" Jul 1 07:56:15.976: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.450113ms Jul 1 07:56:17.980: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008370756s Jul 1 07:56:20.223: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251445612s Jul 1 07:56:23.546: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.574372237s Jul 1 07:56:25.633: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.661473888s Jul 1 07:56:29.931: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.959220782s Jul 1 07:56:31.935: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.963315198s Jul 1 07:56:33.939: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.967454404s Jul 1 07:56:36.014: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.04213246s Jul 1 07:56:38.523: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 22.550840144s Jul 1 07:56:40.527: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.55534923s STEP: Saw pod success Jul 1 07:56:40.527: INFO: Pod "pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:56:40.530: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018 container env-test: STEP: delete the pod Jul 1 07:56:41.844: INFO: Waiting for pod pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018 to disappear Jul 1 07:56:41.913: INFO: Pod pod-configmaps-570bf9c9-bb70-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:56:41.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-djkmh" for this suite. Jul 1 07:56:47.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:56:47.981: INFO: namespace: e2e-tests-secrets-djkmh, resource: bindings, ignored listing per whitelist Jul 1 07:56:48.046: INFO: namespace e2e-tests-secrets-djkmh deletion completed in 6.129360189s • [SLOW TEST:32.225 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:56:48.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc 
STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0701 07:56:58.524592 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 07:56:58.524: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:56:58.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-sxwr7" for this suite. 
Jul 1 07:57:13.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:57:14.514: INFO: namespace: e2e-tests-gc-sxwr7, resource: bindings, ignored listing per whitelist Jul 1 07:57:14.565: INFO: namespace e2e-tests-gc-sxwr7 deletion completed in 15.568184s • [SLOW TEST:26.519 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:57:14.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7a89fc05-bb70-11ea-a133-0242ac110018 STEP: Creating a pod to test consume secrets Jul 1 07:57:22.052: INFO: Waiting up to 5m0s for pod "pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018" in namespace "e2e-tests-secrets-x82r8" to be "success or failure" Jul 1 07:57:22.170: INFO: Pod "pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", 
readiness=false. Elapsed: 117.899417ms Jul 1 07:57:24.570: INFO: Pod "pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.517484976s Jul 1 07:57:28.375: INFO: Pod "pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.322761149s Jul 1 07:57:30.379: INFO: Pod "pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.327079726s Jul 1 07:57:32.386: INFO: Pod "pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.333392313s Jul 1 07:57:34.389: INFO: Pod "pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.336477186s STEP: Saw pod success Jul 1 07:57:34.389: INFO: Pod "pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:57:34.391: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018 container secret-volume-test: STEP: delete the pod Jul 1 07:57:34.447: INFO: Waiting for pod pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018 to disappear Jul 1 07:57:34.452: INFO: Pod pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:57:34.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-x82r8" for this suite. Jul 1 07:57:40.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:57:40.506: INFO: namespace: e2e-tests-secrets-x82r8, resource: bindings, ignored listing per whitelist Jul 1 07:57:40.518: INFO: namespace e2e-tests-secrets-x82r8 deletion completed in 6.062768286s STEP: Destroying namespace "e2e-tests-secret-namespace-8sqfd" for this suite. 
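For context on the cross-namespace secret test above, here is a hedged sketch of the two objects it plausibly creates in the primary namespace. The secret, pod, container, and namespace names are taken from the log; the image, data key, and mount path are illustrative assumptions. A secret with the same name also exists in the second namespace (e2e-tests-secret-namespace-8sqfd), and the test asserts the pod mounts the one from its own namespace.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-7a89fc05-bb70-11ea-a133-0242ac110018
  namespace: e2e-tests-secrets-x82r8
data:
  data-1: dmFsdWUtMQ==              # illustrative base64 payload ("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-7e6d26ae-bb70-11ea-a133-0242ac110018
  namespace: e2e-tests-secrets-x82r8
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test        # container name from the log
    image: busybox                  # assumption; the log does not show the image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume # illustrative path
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-7a89fc05-bb70-11ea-a133-0242ac110018
```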
Jul 1 07:57:46.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:57:46.546: INFO: namespace: e2e-tests-secret-namespace-8sqfd, resource: bindings, ignored listing per whitelist Jul 1 07:57:46.590: INFO: namespace e2e-tests-secret-namespace-8sqfd deletion completed in 6.072052118s • [SLOW TEST:32.024 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:57:46.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jul 1 07:57:46.705: INFO: Waiting up to 5m0s for pod "client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018" in namespace "e2e-tests-containers-nbwhg" to be "success or failure" Jul 1 07:57:46.716: INFO: Pod "client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.929102ms Jul 1 07:57:49.394: INFO: Pod "client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688472848s Jul 1 07:57:51.398: INFO: Pod "client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.692984747s Jul 1 07:57:53.459: INFO: Pod "client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.754272006s Jul 1 07:57:55.462: INFO: Pod "client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.757135445s STEP: Saw pod success Jul 1 07:57:55.462: INFO: Pod "client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:57:55.740: INFO: Trying to get logs from node hunter-worker2 pod client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 07:57:55.927: INFO: Waiting for pod client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018 to disappear Jul 1 07:57:55.932: INFO: Pod client-containers-8d20ab4f-bb70-11ea-a133-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:57:55.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-nbwhg" for this suite. 
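The "override arguments (docker cmd)" pod above is visible only through its lifecycle logs. As a hedged sketch, overriding an image's default arguments is done with the container `args` field; the pod name, image, and argument values here are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative; the real pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: test-container            # container name from the log
    image: busybox                  # assumption
    # `args` replaces the image's Docker CMD while keeping its ENTRYPOINT;
    # `command` would replace the ENTRYPOINT instead.
    args: ["echo", "override", "arguments"]
```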
Jul 1 07:58:01.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:58:01.999: INFO: namespace: e2e-tests-containers-nbwhg, resource: bindings, ignored listing per whitelist Jul 1 07:58:02.030: INFO: namespace e2e-tests-containers-nbwhg deletion completed in 6.094469357s • [SLOW TEST:15.440 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:58:02.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9655d5f4-bb70-11ea-a133-0242ac110018 STEP: Creating a pod to test consume configMaps Jul 1 07:58:02.131: INFO: Waiting up to 5m0s for pod "pod-configmaps-96572168-bb70-11ea-a133-0242ac110018" in namespace "e2e-tests-configmap-cgbzs" to be "success or failure" Jul 1 07:58:02.135: INFO: Pod "pod-configmaps-96572168-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010131ms Jul 1 07:58:04.237: INFO: Pod "pod-configmaps-96572168-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105528748s Jul 1 07:58:06.304: INFO: Pod "pod-configmaps-96572168-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172775369s Jul 1 07:58:08.356: INFO: Pod "pod-configmaps-96572168-bb70-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224933353s Jul 1 07:58:10.359: INFO: Pod "pod-configmaps-96572168-bb70-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.22785083s STEP: Saw pod success Jul 1 07:58:10.359: INFO: Pod "pod-configmaps-96572168-bb70-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 07:58:10.361: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-96572168-bb70-11ea-a133-0242ac110018 container configmap-volume-test: STEP: delete the pod Jul 1 07:58:10.396: INFO: Waiting for pod pod-configmaps-96572168-bb70-11ea-a133-0242ac110018 to disappear Jul 1 07:58:10.411: INFO: Pod pod-configmaps-96572168-bb70-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:58:10.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-cgbzs" for this suite. 
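"Consumable from pods in volume with mappings" refers to projecting ConfigMap keys onto chosen file paths via the volume's `items` list. A minimal hedged sketch follows; the key names, paths, and image are illustrative assumptions, and the object names are shortened from the UID-suffixed ones in the log.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # illustrative; log names carry a UID suffix
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test     # container name from the log
    image: busybox                  # assumption
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                        # the "mappings": key -> relative file path
      - key: data-1
        path: path/to/data-2
```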
Jul 1 07:58:16.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:58:16.493: INFO: namespace: e2e-tests-configmap-cgbzs, resource: bindings, ignored listing per whitelist Jul 1 07:58:16.557: INFO: namespace e2e-tests-configmap-cgbzs deletion completed in 6.143426604s • [SLOW TEST:14.527 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:58:16.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 1 07:58:32.792: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 07:58:32.814: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 07:58:34.815: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 07:58:34.818: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 07:58:36.815: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 07:58:36.825: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 07:58:38.815: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 07:58:38.818: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 07:58:38.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-njwhn" for this suite. 
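The postStart test above first creates a handler pod ("create the container to handle the HTTPGet hook request") and then a pod whose hook calls it. A hedged sketch of the hooked pod: the pod name is from the log, while the image, port, path, and target IP are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # pod name from the log
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: nginx                       # assumption
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # illustrative
          port: 8080                   # illustrative
          host: 10.244.0.10            # illustrative: IP of the handler pod
```

The kubelet does not mark the container Running until the postStart handler completes, which is why the "check poststart hook" step can reliably observe the HTTP request on the handler side.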
Jul 1 07:59:00.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 07:59:00.854: INFO: namespace: e2e-tests-container-lifecycle-hook-njwhn, resource: bindings, ignored listing per whitelist Jul 1 07:59:00.915: INFO: namespace e2e-tests-container-lifecycle-hook-njwhn deletion completed in 22.093885994s • [SLOW TEST:44.358 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 07:59:00.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-bp6ft [It] Scaling should happen in predictable order and halt if any stateful pod is 
unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-bp6ft STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-bp6ft Jul 1 07:59:01.047: INFO: Found 0 stateful pods, waiting for 1 Jul 1 07:59:11.050: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jul 1 07:59:21.050: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jul 1 07:59:21.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp6ft ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 07:59:21.538: INFO: stderr: "I0701 07:59:21.290019 1019 log.go:172] (0xc0001688f0) (0xc000504aa0) Create stream\nI0701 07:59:21.290151 1019 log.go:172] (0xc0001688f0) (0xc000504aa0) Stream added, broadcasting: 1\nI0701 07:59:21.294731 1019 log.go:172] (0xc0001688f0) Reply frame received for 1\nI0701 07:59:21.294792 1019 log.go:172] (0xc0001688f0) (0xc000364c80) Create stream\nI0701 07:59:21.294817 1019 log.go:172] (0xc0001688f0) (0xc000364c80) Stream added, broadcasting: 3\nI0701 07:59:21.296847 1019 log.go:172] (0xc0001688f0) Reply frame received for 3\nI0701 07:59:21.296881 1019 log.go:172] (0xc0001688f0) (0xc0003655e0) Create stream\nI0701 07:59:21.296899 1019 log.go:172] (0xc0001688f0) (0xc0003655e0) Stream added, broadcasting: 5\nI0701 07:59:21.300399 1019 log.go:172] (0xc0001688f0) Reply frame received for 5\nI0701 07:59:21.529828 1019 log.go:172] (0xc0001688f0) Data frame received for 5\nI0701 07:59:21.529851 1019 log.go:172] (0xc0003655e0) (5) Data frame handling\nI0701 07:59:21.529878 1019 log.go:172] (0xc0001688f0) Data frame received for 
3\nI0701 07:59:21.529885 1019 log.go:172] (0xc000364c80) (3) Data frame handling\nI0701 07:59:21.529893 1019 log.go:172] (0xc000364c80) (3) Data frame sent\nI0701 07:59:21.529899 1019 log.go:172] (0xc0001688f0) Data frame received for 3\nI0701 07:59:21.529905 1019 log.go:172] (0xc000364c80) (3) Data frame handling\nI0701 07:59:21.530941 1019 log.go:172] (0xc0001688f0) Data frame received for 1\nI0701 07:59:21.530989 1019 log.go:172] (0xc000504aa0) (1) Data frame handling\nI0701 07:59:21.531020 1019 log.go:172] (0xc000504aa0) (1) Data frame sent\nI0701 07:59:21.531067 1019 log.go:172] (0xc0001688f0) (0xc000504aa0) Stream removed, broadcasting: 1\nI0701 07:59:21.531107 1019 log.go:172] (0xc0001688f0) Go away received\nI0701 07:59:21.531226 1019 log.go:172] (0xc0001688f0) (0xc000504aa0) Stream removed, broadcasting: 1\nI0701 07:59:21.531276 1019 log.go:172] (0xc0001688f0) (0xc000364c80) Stream removed, broadcasting: 3\nI0701 07:59:21.531308 1019 log.go:172] (0xc0001688f0) (0xc0003655e0) Stream removed, broadcasting: 5\n" Jul 1 07:59:21.538: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 07:59:21.538: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 07:59:21.541: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 1 07:59:31.545: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 07:59:31.545: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 07:59:31.608: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999517s Jul 1 07:59:32.611: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.947181634s Jul 1 07:59:34.152: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.944124677s Jul 1 07:59:35.182: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.402900044s Jul 1 07:59:36.202: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 5.373052986s Jul 1 07:59:37.226: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.352950415s Jul 1 07:59:38.229: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.329285424s Jul 1 07:59:40.891: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.326648964s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-bp6ft Jul 1 07:59:41.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp6ft ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 07:59:43.028: INFO: stderr: "I0701 07:59:42.966875 1036 log.go:172] (0xc00013a790) (0xc00072a640) Create stream\nI0701 07:59:42.966961 1036 log.go:172] (0xc00013a790) (0xc00072a640) Stream added, broadcasting: 1\nI0701 07:59:42.968614 1036 log.go:172] (0xc00013a790) Reply frame received for 1\nI0701 07:59:42.968643 1036 log.go:172] (0xc00013a790) (0xc00072a6e0) Create stream\nI0701 07:59:42.968651 1036 log.go:172] (0xc00013a790) (0xc00072a6e0) Stream added, broadcasting: 3\nI0701 07:59:42.969259 1036 log.go:172] (0xc00013a790) Reply frame received for 3\nI0701 07:59:42.969287 1036 log.go:172] (0xc00013a790) (0xc00067ebe0) Create stream\nI0701 07:59:42.969298 1036 log.go:172] (0xc00013a790) (0xc00067ebe0) Stream added, broadcasting: 5\nI0701 07:59:42.969865 1036 log.go:172] (0xc00013a790) Reply frame received for 5\nI0701 07:59:43.023863 1036 log.go:172] (0xc00013a790) Data frame received for 5\nI0701 07:59:43.023882 1036 log.go:172] (0xc00067ebe0) (5) Data frame handling\nI0701 07:59:43.023909 1036 log.go:172] (0xc00013a790) Data frame received for 3\nI0701 07:59:43.023919 1036 log.go:172] (0xc00072a6e0) (3) Data frame handling\nI0701 07:59:43.023927 1036 log.go:172] (0xc00072a6e0) (3) Data frame sent\nI0701 07:59:43.023933 1036 log.go:172] (0xc00013a790) Data frame 
received for 3\nI0701 07:59:43.023939 1036 log.go:172] (0xc00072a6e0) (3) Data frame handling\nI0701 07:59:43.024923 1036 log.go:172] (0xc00013a790) Data frame received for 1\nI0701 07:59:43.024933 1036 log.go:172] (0xc00072a640) (1) Data frame handling\nI0701 07:59:43.024947 1036 log.go:172] (0xc00072a640) (1) Data frame sent\nI0701 07:59:43.024955 1036 log.go:172] (0xc00013a790) (0xc00072a640) Stream removed, broadcasting: 1\nI0701 07:59:43.024971 1036 log.go:172] (0xc00013a790) Go away received\nI0701 07:59:43.025229 1036 log.go:172] (0xc00013a790) (0xc00072a640) Stream removed, broadcasting: 1\nI0701 07:59:43.025243 1036 log.go:172] (0xc00013a790) (0xc00072a6e0) Stream removed, broadcasting: 3\nI0701 07:59:43.025250 1036 log.go:172] (0xc00013a790) (0xc00067ebe0) Stream removed, broadcasting: 5\n" Jul 1 07:59:43.028: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 07:59:43.028: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 07:59:43.049: INFO: Found 1 stateful pods, waiting for 3 Jul 1 07:59:53.052: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 07:59:53.052: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 07:59:53.052: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 1 08:00:03.055: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 08:00:03.055: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 08:00:03.055: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jul 1 08:00:03.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-bp6ft ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 08:00:03.526: INFO: stderr: "I0701 08:00:03.253604 1057 log.go:172] (0xc00071c630) (0xc000831a40) Create stream\nI0701 08:00:03.253694 1057 log.go:172] (0xc00071c630) (0xc000831a40) Stream added, broadcasting: 1\nI0701 08:00:03.259323 1057 log.go:172] (0xc00071c630) Reply frame received for 1\nI0701 08:00:03.259367 1057 log.go:172] (0xc00071c630) (0xc0005420a0) Create stream\nI0701 08:00:03.259377 1057 log.go:172] (0xc00071c630) (0xc0005420a0) Stream added, broadcasting: 3\nI0701 08:00:03.260562 1057 log.go:172] (0xc00071c630) Reply frame received for 3\nI0701 08:00:03.260577 1057 log.go:172] (0xc00071c630) (0xc000542320) Create stream\nI0701 08:00:03.260584 1057 log.go:172] (0xc00071c630) (0xc000542320) Stream added, broadcasting: 5\nI0701 08:00:03.261724 1057 log.go:172] (0xc00071c630) Reply frame received for 5\nI0701 08:00:03.495560 1057 log.go:172] (0xc00071c630) Data frame received for 3\nI0701 08:00:03.495582 1057 log.go:172] (0xc0005420a0) (3) Data frame handling\nI0701 08:00:03.495592 1057 log.go:172] (0xc0005420a0) (3) Data frame sent\nI0701 08:00:03.510803 1057 log.go:172] (0xc00071c630) Data frame received for 3\nI0701 08:00:03.510878 1057 log.go:172] (0xc0005420a0) (3) Data frame handling\nI0701 08:00:03.510922 1057 log.go:172] (0xc00071c630) Data frame received for 5\nI0701 08:00:03.510933 1057 log.go:172] (0xc000542320) (5) Data frame handling\nI0701 08:00:03.523159 1057 log.go:172] (0xc00071c630) Data frame received for 1\nI0701 08:00:03.523234 1057 log.go:172] (0xc000831a40) (1) Data frame handling\nI0701 08:00:03.523266 1057 log.go:172] (0xc000831a40) (1) Data frame sent\nI0701 08:00:03.523305 1057 log.go:172] (0xc00071c630) (0xc000831a40) Stream removed, broadcasting: 1\nI0701 08:00:03.523334 1057 log.go:172] (0xc00071c630) Go away received\nI0701 08:00:03.523581 1057 log.go:172] (0xc00071c630) (0xc000831a40) Stream removed, 
broadcasting: 1\nI0701 08:00:03.523593 1057 log.go:172] (0xc00071c630) (0xc0005420a0) Stream removed, broadcasting: 3\nI0701 08:00:03.523600 1057 log.go:172] (0xc00071c630) (0xc000542320) Stream removed, broadcasting: 5\n" Jul 1 08:00:03.526: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 08:00:03.526: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 08:00:03.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp6ft ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 08:00:04.001: INFO: stderr: "I0701 08:00:03.716680 1076 log.go:172] (0xc00015c630) (0xc0003095e0) Create stream\nI0701 08:00:03.716776 1076 log.go:172] (0xc00015c630) (0xc0003095e0) Stream added, broadcasting: 1\nI0701 08:00:03.718815 1076 log.go:172] (0xc00015c630) Reply frame received for 1\nI0701 08:00:03.718840 1076 log.go:172] (0xc00015c630) (0xc0004f8500) Create stream\nI0701 08:00:03.718848 1076 log.go:172] (0xc00015c630) (0xc0004f8500) Stream added, broadcasting: 3\nI0701 08:00:03.722105 1076 log.go:172] (0xc00015c630) Reply frame received for 3\nI0701 08:00:03.722135 1076 log.go:172] (0xc00015c630) (0xc000800000) Create stream\nI0701 08:00:03.722144 1076 log.go:172] (0xc00015c630) (0xc000800000) Stream added, broadcasting: 5\nI0701 08:00:03.725991 1076 log.go:172] (0xc00015c630) Reply frame received for 5\nI0701 08:00:03.996509 1076 log.go:172] (0xc00015c630) Data frame received for 3\nI0701 08:00:03.996545 1076 log.go:172] (0xc0004f8500) (3) Data frame handling\nI0701 08:00:03.996557 1076 log.go:172] (0xc0004f8500) (3) Data frame sent\nI0701 08:00:03.996563 1076 log.go:172] (0xc00015c630) Data frame received for 3\nI0701 08:00:03.996569 1076 log.go:172] (0xc0004f8500) (3) Data frame handling\nI0701 08:00:03.996644 1076 log.go:172] (0xc00015c630) Data frame received for 5\nI0701 
08:00:03.996654 1076 log.go:172] (0xc000800000) (5) Data frame handling\nI0701 08:00:03.996671 1076 log.go:172] (0xc00015c630) Data frame received for 1\nI0701 08:00:03.996852 1076 log.go:172] (0xc0003095e0) (1) Data frame handling\nI0701 08:00:03.996902 1076 log.go:172] (0xc0003095e0) (1) Data frame sent\nI0701 08:00:03.996944 1076 log.go:172] (0xc00015c630) (0xc0003095e0) Stream removed, broadcasting: 1\nI0701 08:00:03.996995 1076 log.go:172] (0xc00015c630) Go away received\nI0701 08:00:03.997345 1076 log.go:172] (0xc00015c630) (0xc0003095e0) Stream removed, broadcasting: 1\nI0701 08:00:03.997388 1076 log.go:172] (0xc00015c630) (0xc0004f8500) Stream removed, broadcasting: 3\nI0701 08:00:03.997432 1076 log.go:172] (0xc00015c630) (0xc000800000) Stream removed, broadcasting: 5\n" Jul 1 08:00:04.001: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 08:00:04.001: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 08:00:04.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp6ft ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 08:00:04.597: INFO: stderr: "I0701 08:00:04.261629 1094 log.go:172] (0xc0001de4d0) (0xc00026d680) Create stream\nI0701 08:00:04.261774 1094 log.go:172] (0xc0001de4d0) (0xc00026d680) Stream added, broadcasting: 1\nI0701 08:00:04.267222 1094 log.go:172] (0xc0001de4d0) Reply frame received for 1\nI0701 08:00:04.267265 1094 log.go:172] (0xc0001de4d0) (0xc0003ea000) Create stream\nI0701 08:00:04.267274 1094 log.go:172] (0xc0001de4d0) (0xc0003ea000) Stream added, broadcasting: 3\nI0701 08:00:04.268989 1094 log.go:172] (0xc0001de4d0) Reply frame received for 3\nI0701 08:00:04.269011 1094 log.go:172] (0xc0001de4d0) (0xc000195c20) Create stream\nI0701 08:00:04.269019 1094 log.go:172] (0xc0001de4d0) (0xc000195c20) Stream added, broadcasting: 
5\nI0701 08:00:04.271540 1094 log.go:172] (0xc0001de4d0) Reply frame received for 5\nI0701 08:00:04.586519 1094 log.go:172] (0xc0001de4d0) Data frame received for 5\nI0701 08:00:04.586665 1094 log.go:172] (0xc000195c20) (5) Data frame handling\nI0701 08:00:04.586710 1094 log.go:172] (0xc0001de4d0) Data frame received for 3\nI0701 08:00:04.586757 1094 log.go:172] (0xc0003ea000) (3) Data frame handling\nI0701 08:00:04.586796 1094 log.go:172] (0xc0003ea000) (3) Data frame sent\nI0701 08:00:04.586879 1094 log.go:172] (0xc0001de4d0) Data frame received for 3\nI0701 08:00:04.586914 1094 log.go:172] (0xc0003ea000) (3) Data frame handling\nI0701 08:00:04.587771 1094 log.go:172] (0xc0001de4d0) Data frame received for 1\nI0701 08:00:04.587781 1094 log.go:172] (0xc00026d680) (1) Data frame handling\nI0701 08:00:04.587787 1094 log.go:172] (0xc00026d680) (1) Data frame sent\nI0701 08:00:04.589002 1094 log.go:172] (0xc0001de4d0) (0xc00026d680) Stream removed, broadcasting: 1\nI0701 08:00:04.589332 1094 log.go:172] (0xc0001de4d0) (0xc00026d680) Stream removed, broadcasting: 1\nI0701 08:00:04.589345 1094 log.go:172] (0xc0001de4d0) (0xc0003ea000) Stream removed, broadcasting: 3\nI0701 08:00:04.590051 1094 log.go:172] (0xc0001de4d0) (0xc000195c20) Stream removed, broadcasting: 5\n" Jul 1 08:00:04.597: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 08:00:04.597: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 08:00:04.597: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 08:00:04.636: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 1 08:00:14.644: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 08:00:14.644: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 1 08:00:14.644: INFO: Waiting for pod ss-2 to enter Running - 
Ready=false, currently Running - Ready=false Jul 1 08:00:14.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999326s Jul 1 08:00:15.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.963213379s Jul 1 08:00:16.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.958310237s Jul 1 08:00:17.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.925427362s Jul 1 08:00:18.792: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.921025908s Jul 1 08:00:19.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.860994623s Jul 1 08:00:20.800: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.856953868s Jul 1 08:00:21.805: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.853283958s Jul 1 08:00:22.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.847699039s Jul 1 08:00:23.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 844.180299ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-bp6ft Jul 1 08:00:24.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp6ft ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 08:00:25.285: INFO: stderr: "I0701 08:00:25.061869 1110 log.go:172] (0xc00014a630) (0xc000805860) Create stream\nI0701 08:00:25.062051 1110 log.go:172] (0xc00014a630) (0xc000805860) Stream added, broadcasting: 1\nI0701 08:00:25.067746 1110 log.go:172] (0xc00014a630) Reply frame received for 1\nI0701 08:00:25.067798 1110 log.go:172] (0xc00014a630) (0xc00090c000) Create stream\nI0701 08:00:25.067823 1110 log.go:172] (0xc00014a630) (0xc00090c000) Stream added, broadcasting: 3\nI0701 08:00:25.072070 1110 log.go:172] (0xc00014a630) Reply frame received for 3\nI0701 08:00:25.072095 1110 log.go:172] (0xc00014a630) (0xc0002cbd60) Create 
stream\nI0701 08:00:25.072104 1110 log.go:172] (0xc00014a630) (0xc0002cbd60) Stream added, broadcasting: 5\nI0701 08:00:25.073301 1110 log.go:172] (0xc00014a630) Reply frame received for 5\nI0701 08:00:25.277661 1110 log.go:172] (0xc00014a630) Data frame received for 5\nI0701 08:00:25.277681 1110 log.go:172] (0xc0002cbd60) (5) Data frame handling\nI0701 08:00:25.277704 1110 log.go:172] (0xc00014a630) Data frame received for 3\nI0701 08:00:25.277710 1110 log.go:172] (0xc00090c000) (3) Data frame handling\nI0701 08:00:25.277717 1110 log.go:172] (0xc00090c000) (3) Data frame sent\nI0701 08:00:25.277722 1110 log.go:172] (0xc00014a630) Data frame received for 3\nI0701 08:00:25.277727 1110 log.go:172] (0xc00090c000) (3) Data frame handling\nI0701 08:00:25.278992 1110 log.go:172] (0xc00014a630) Data frame received for 1\nI0701 08:00:25.279004 1110 log.go:172] (0xc000805860) (1) Data frame handling\nI0701 08:00:25.279011 1110 log.go:172] (0xc000805860) (1) Data frame sent\nI0701 08:00:25.279018 1110 log.go:172] (0xc00014a630) (0xc000805860) Stream removed, broadcasting: 1\nI0701 08:00:25.279137 1110 log.go:172] (0xc00014a630) (0xc000805860) Stream removed, broadcasting: 1\nI0701 08:00:25.279147 1110 log.go:172] (0xc00014a630) (0xc00090c000) Stream removed, broadcasting: 3\nI0701 08:00:25.279153 1110 log.go:172] (0xc00014a630) (0xc0002cbd60) Stream removed, broadcasting: 5\n" Jul 1 08:00:25.285: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 08:00:25.285: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 08:00:25.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp6ft ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 08:00:25.685: INFO: stderr: "I0701 08:00:25.500526 1129 log.go:172] (0xc00014a840) (0xc000546960) Create stream\nI0701 08:00:25.500584 1129 
log.go:172] (0xc00014a840) (0xc000546960) Stream added, broadcasting: 1\nI0701 08:00:25.505648 1129 log.go:172] (0xc00014a840) Reply frame received for 1\nI0701 08:00:25.505681 1129 log.go:172] (0xc00014a840) (0xc000546000) Create stream\nI0701 08:00:25.505697 1129 log.go:172] (0xc00014a840) (0xc000546000) Stream added, broadcasting: 3\nI0701 08:00:25.506723 1129 log.go:172] (0xc00014a840) Reply frame received for 3\nI0701 08:00:25.506738 1129 log.go:172] (0xc00014a840) (0xc0005460a0) Create stream\nI0701 08:00:25.506744 1129 log.go:172] (0xc00014a840) (0xc0005460a0) Stream added, broadcasting: 5\nI0701 08:00:25.507570 1129 log.go:172] (0xc00014a840) Reply frame received for 5\nI0701 08:00:25.678704 1129 log.go:172] (0xc00014a840) Data frame received for 5\nI0701 08:00:25.678739 1129 log.go:172] (0xc0005460a0) (5) Data frame handling\nI0701 08:00:25.678767 1129 log.go:172] (0xc00014a840) Data frame received for 3\nI0701 08:00:25.678776 1129 log.go:172] (0xc000546000) (3) Data frame handling\nI0701 08:00:25.678784 1129 log.go:172] (0xc000546000) (3) Data frame sent\nI0701 08:00:25.678791 1129 log.go:172] (0xc00014a840) Data frame received for 3\nI0701 08:00:25.678796 1129 log.go:172] (0xc000546000) (3) Data frame handling\nI0701 08:00:25.680657 1129 log.go:172] (0xc00014a840) Data frame received for 1\nI0701 08:00:25.680669 1129 log.go:172] (0xc000546960) (1) Data frame handling\nI0701 08:00:25.680678 1129 log.go:172] (0xc000546960) (1) Data frame sent\nI0701 08:00:25.680688 1129 log.go:172] (0xc00014a840) (0xc000546960) Stream removed, broadcasting: 1\nI0701 08:00:25.680818 1129 log.go:172] (0xc00014a840) (0xc000546960) Stream removed, broadcasting: 1\nI0701 08:00:25.680828 1129 log.go:172] (0xc00014a840) (0xc000546000) Stream removed, broadcasting: 3\nI0701 08:00:25.680835 1129 log.go:172] (0xc00014a840) (0xc0005460a0) Stream removed, broadcasting: 5\n" Jul 1 08:00:25.685: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 
08:00:25.685: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 08:00:25.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp6ft ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 08:00:26.112: INFO: stderr: "I0701 08:00:25.898509 1149 log.go:172] (0xc00059e2c0) (0xc00064d5e0) Create stream\nI0701 08:00:25.898624 1149 log.go:172] (0xc00059e2c0) (0xc00064d5e0) Stream added, broadcasting: 1\nI0701 08:00:25.901841 1149 log.go:172] (0xc00059e2c0) Reply frame received for 1\nI0701 08:00:25.901871 1149 log.go:172] (0xc00059e2c0) (0xc000898000) Create stream\nI0701 08:00:25.901882 1149 log.go:172] (0xc00059e2c0) (0xc000898000) Stream added, broadcasting: 3\nI0701 08:00:25.903316 1149 log.go:172] (0xc00059e2c0) Reply frame received for 3\nI0701 08:00:25.903371 1149 log.go:172] (0xc00059e2c0) (0xc00064d680) Create stream\nI0701 08:00:25.903383 1149 log.go:172] (0xc00059e2c0) (0xc00064d680) Stream added, broadcasting: 5\nI0701 08:00:25.904858 1149 log.go:172] (0xc00059e2c0) Reply frame received for 5\nI0701 08:00:26.102782 1149 log.go:172] (0xc00059e2c0) Data frame received for 5\nI0701 08:00:26.102818 1149 log.go:172] (0xc00059e2c0) Data frame received for 3\nI0701 08:00:26.102848 1149 log.go:172] (0xc000898000) (3) Data frame handling\nI0701 08:00:26.102857 1149 log.go:172] (0xc000898000) (3) Data frame sent\nI0701 08:00:26.102876 1149 log.go:172] (0xc00064d680) (5) Data frame handling\nI0701 08:00:26.103420 1149 log.go:172] (0xc00059e2c0) Data frame received for 3\nI0701 08:00:26.103429 1149 log.go:172] (0xc000898000) (3) Data frame handling\nI0701 08:00:26.104078 1149 log.go:172] (0xc00059e2c0) Data frame received for 1\nI0701 08:00:26.104089 1149 log.go:172] (0xc00064d5e0) (1) Data frame handling\nI0701 08:00:26.104096 1149 log.go:172] (0xc00064d5e0) (1) Data frame sent\nI0701 
08:00:26.104178 1149 log.go:172] (0xc00059e2c0) (0xc00064d5e0) Stream removed, broadcasting: 1\nI0701 08:00:26.104295 1149 log.go:172] (0xc00059e2c0) (0xc00064d5e0) Stream removed, broadcasting: 1\nI0701 08:00:26.104302 1149 log.go:172] (0xc00059e2c0) (0xc000898000) Stream removed, broadcasting: 3\nI0701 08:00:26.104402 1149 log.go:172] (0xc00059e2c0) (0xc00064d680) Stream removed, broadcasting: 5\n" Jul 1 08:00:26.112: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 08:00:26.112: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 08:00:26.112: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 1 08:01:06.146: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bp6ft Jul 1 08:01:06.149: INFO: Scaling statefulset ss to 0 Jul 1 08:01:06.158: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 08:01:06.160: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:01:06.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-bp6ft" for this suite. 
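The scale-down test above toggles each pod's readiness by moving nginx's index.html out of the web root (so the readiness probe fails) and later moving it back; the `|| true` keeps `kubectl exec` exiting 0 even when the file has already been moved. A minimal local sketch of that idempotent move idiom, using temp directories in place of the real container paths:

```shell
#!/bin/sh
# Sketch of the "mv ... || true" idiom the e2e test runs inside each pod.
set -u
webroot=$(mktemp -d)   # stands in for /usr/share/nginx/html
stash=$(mktemp -d)     # stands in for /tmp

echo ok > "$webroot/index.html"

# First move succeeds: the file leaves the web root, so a readiness
# probe hitting index.html would start failing.
mv -v "$webroot/index.html" "$stash/" || true
# Second move finds no source file; "|| true" swallows the non-zero
# exit so the exec'd command still reports success, as in the log.
mv -v "$webroot/index.html" "$stash/" || true

test ! -e "$webroot/index.html" && test -e "$stash/index.html" && echo "moved"
```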
Jul 1 08:01:14.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:01:14.272: INFO: namespace: e2e-tests-statefulset-bp6ft, resource: bindings, ignored listing per whitelist Jul 1 08:01:14.290: INFO: namespace e2e-tests-statefulset-bp6ft deletion completed in 8.105807667s • [SLOW TEST:133.374 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:01:14.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-08f11190-bb71-11ea-a133-0242ac110018 STEP: Creating a pod to test consume secrets Jul 1 08:01:14.473: INFO: Waiting up to 5m0s for pod "pod-secrets-08fb7304-bb71-11ea-a133-0242ac110018" in namespace "e2e-tests-secrets-dh4fs" to be "success or failure" Jul 1 08:01:14.492: INFO: Pod 
"pod-secrets-08fb7304-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.510172ms Jul 1 08:01:16.501: INFO: Pod "pod-secrets-08fb7304-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028299786s Jul 1 08:01:18.507: INFO: Pod "pod-secrets-08fb7304-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033817462s Jul 1 08:01:20.510: INFO: Pod "pod-secrets-08fb7304-bb71-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036588621s STEP: Saw pod success Jul 1 08:01:20.510: INFO: Pod "pod-secrets-08fb7304-bb71-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:01:20.513: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-08fb7304-bb71-11ea-a133-0242ac110018 container secret-env-test: STEP: delete the pod Jul 1 08:01:20.601: INFO: Waiting for pod pod-secrets-08fb7304-bb71-11ea-a133-0242ac110018 to disappear Jul 1 08:01:20.625: INFO: Pod pod-secrets-08fb7304-bb71-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:01:20.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dh4fs" for this suite. 
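The Secrets test above injects a Secret value into a container environment variable. Secret manifests carry their `.data` values base64-encoded, and the kubelet exposes the decoded bytes to the pod; a local sketch of that round trip (the plaintext value here is made up for illustration):

```shell
#!/bin/sh
# Base64 round trip as used in a Secret manifest's .data fields.
plain="value-1"
encoded=$(printf '%s' "$plain" | base64)      # what goes into the manifest
decoded=$(printf '%s' "$encoded" | base64 -d) # what the container sees in its env

echo "encoded=$encoded"
[ "$decoded" = "$plain" ] && echo "round trip ok"
```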
Jul 1 08:01:28.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:01:28.874: INFO: namespace: e2e-tests-secrets-dh4fs, resource: bindings, ignored listing per whitelist Jul 1 08:01:28.879: INFO: namespace e2e-tests-secrets-dh4fs deletion completed in 8.250678222s • [SLOW TEST:14.589 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:01:28.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:01:34.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-546q5" for this suite. 
Jul 1 08:01:40.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:01:40.930: INFO: namespace: e2e-tests-emptydir-wrapper-546q5, resource: bindings, ignored listing per whitelist Jul 1 08:01:40.997: INFO: namespace e2e-tests-emptydir-wrapper-546q5 deletion completed in 6.318053037s • [SLOW TEST:12.117 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:01:40.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jul 1 08:01:41.090: INFO: Waiting up to 5m0s for pod "var-expansion-18d935cc-bb71-11ea-a133-0242ac110018" in namespace "e2e-tests-var-expansion-w4hrm" to be "success or failure" Jul 1 08:01:41.094: INFO: Pod "var-expansion-18d935cc-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128985ms Jul 1 08:01:43.156: INFO: Pod "var-expansion-18d935cc-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.066096489s Jul 1 08:01:45.199: INFO: Pod "var-expansion-18d935cc-bb71-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.108367466s Jul 1 08:01:47.202: INFO: Pod "var-expansion-18d935cc-bb71-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111487618s STEP: Saw pod success Jul 1 08:01:47.202: INFO: Pod "var-expansion-18d935cc-bb71-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:01:47.204: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-18d935cc-bb71-11ea-a133-0242ac110018 container dapi-container: STEP: delete the pod Jul 1 08:01:47.265: INFO: Waiting for pod var-expansion-18d935cc-bb71-11ea-a133-0242ac110018 to disappear Jul 1 08:01:47.272: INFO: Pod var-expansion-18d935cc-bb71-11ea-a133-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:01:47.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-w4hrm" for this suite. 
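The Variable Expansion test above composes one env var from others; in a pod spec this is written with `$(VAR)` references inside an env `value` field. A plain-shell analogue of the composition being verified (variable names are illustrative, not the test's actual vars):

```shell
#!/bin/sh
# Shell analogue of env composition; a pod spec would write $(FOO)
# where the shell writes $FOO.
FOO="test-value"
COMPOSED="prefix-$FOO-suffix"
echo "$COMPOSED"
[ "$COMPOSED" = "prefix-test-value-suffix" ] && echo "composition ok"
```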
Jul 1 08:01:53.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:01:53.331: INFO: namespace: e2e-tests-var-expansion-w4hrm, resource: bindings, ignored listing per whitelist Jul 1 08:01:53.389: INFO: namespace e2e-tests-var-expansion-w4hrm deletion completed in 6.113961162s • [SLOW TEST:12.392 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:01:53.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 1 08:01:53.587: INFO: Waiting up to 5m0s for pod "pod-20484e1e-bb71-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-wqqf8" to be "success or failure" Jul 1 08:01:53.591: INFO: Pod "pod-20484e1e-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.389851ms Jul 1 08:01:55.627: INFO: Pod "pod-20484e1e-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.039827643s Jul 1 08:01:57.632: INFO: Pod "pod-20484e1e-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044220678s Jul 1 08:01:59.636: INFO: Pod "pod-20484e1e-bb71-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049053968s STEP: Saw pod success Jul 1 08:01:59.636: INFO: Pod "pod-20484e1e-bb71-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:01:59.640: INFO: Trying to get logs from node hunter-worker2 pod pod-20484e1e-bb71-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 08:01:59.664: INFO: Waiting for pod pod-20484e1e-bb71-11ea-a133-0242ac110018 to disappear Jul 1 08:01:59.668: INFO: Pod pod-20484e1e-bb71-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:01:59.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wqqf8" for this suite. 
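Each emptyDir permission case above writes a file with the requested mode into the volume and reads the permission bits back inside the pod. A local sketch of that check, using `chmod`/`stat` in place of the test's mount-test container (GNU `stat` assumed):

```shell
#!/bin/sh
# Create files with the two modes exercised above and read the bits back.
dir=$(mktemp -d)
touch "$dir/f666" "$dir/f777"
chmod 0666 "$dir/f666"
chmod 0777 "$dir/f777"
stat -c '%a' "$dir/f666"   # GNU stat: prints the octal mode, 666
stat -c '%a' "$dir/f777"   # prints 777
```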
Jul 1 08:02:07.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:02:07.704: INFO: namespace: e2e-tests-emptydir-wqqf8, resource: bindings, ignored listing per whitelist Jul 1 08:02:07.756: INFO: namespace e2e-tests-emptydir-wqqf8 deletion completed in 8.084938048s • [SLOW TEST:14.367 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:02:07.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 1 08:02:07.872: INFO: Waiting up to 5m0s for pod "pod-28ca4ff4-bb71-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-wfqjt" to be "success or failure" Jul 1 08:02:07.879: INFO: Pod "pod-28ca4ff4-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.582092ms Jul 1 08:02:09.884: INFO: Pod "pod-28ca4ff4-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01220119s Jul 1 08:02:11.888: INFO: Pod "pod-28ca4ff4-bb71-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016757554s STEP: Saw pod success Jul 1 08:02:11.888: INFO: Pod "pod-28ca4ff4-bb71-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:02:11.892: INFO: Trying to get logs from node hunter-worker2 pod pod-28ca4ff4-bb71-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 08:02:11.911: INFO: Waiting for pod pod-28ca4ff4-bb71-11ea-a133-0242ac110018 to disappear Jul 1 08:02:11.915: INFO: Pod pod-28ca4ff4-bb71-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:02:11.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wfqjt" for this suite. Jul 1 08:02:19.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:02:19.955: INFO: namespace: e2e-tests-emptydir-wfqjt, resource: bindings, ignored listing per whitelist Jul 1 08:02:20.046: INFO: namespace e2e-tests-emptydir-wfqjt deletion completed in 8.127561939s • [SLOW TEST:12.289 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jul 1 08:02:20.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:02:27.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-pb86s" for this suite. Jul 1 08:02:49.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:02:49.387: INFO: namespace: e2e-tests-replication-controller-pb86s, resource: bindings, ignored listing per whitelist Jul 1 08:02:49.438: INFO: namespace e2e-tests-replication-controller-pb86s deletion completed in 22.095168597s • [SLOW TEST:29.392 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:02:49.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-8mq5k [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jul 1 08:02:50.617: INFO: Found 0 stateful pods, waiting for 3 Jul 1 08:03:00.648: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 08:03:00.648: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 08:03:00.648: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 1 08:03:10.623: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 08:03:10.623: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 08:03:10.623: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jul 1 08:03:10.651: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jul 1 
08:03:20.740: INFO: Updating stateful set ss2 Jul 1 08:03:20.753: INFO: Waiting for Pod e2e-tests-statefulset-8mq5k/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jul 1 08:03:30.844: INFO: Found 2 stateful pods, waiting for 3 Jul 1 08:03:40.849: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 08:03:40.849: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 08:03:40.849: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jul 1 08:03:40.872: INFO: Updating stateful set ss2 Jul 1 08:03:40.886: INFO: Waiting for Pod e2e-tests-statefulset-8mq5k/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 1 08:03:50.911: INFO: Updating stateful set ss2 Jul 1 08:03:50.952: INFO: Waiting for StatefulSet e2e-tests-statefulset-8mq5k/ss2 to complete update Jul 1 08:03:50.952: INFO: Waiting for Pod e2e-tests-statefulset-8mq5k/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 1 08:04:00.961: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8mq5k Jul 1 08:04:00.964: INFO: Scaling statefulset ss2 to 0 Jul 1 08:04:30.995: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 08:04:30.999: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:04:31.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-8mq5k" for this suite. 
Jul 1 08:04:39.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:04:39.081: INFO: namespace: e2e-tests-statefulset-8mq5k, resource: bindings, ignored listing per whitelist Jul 1 08:04:39.149: INFO: namespace e2e-tests-statefulset-8mq5k deletion completed in 8.098627154s • [SLOW TEST:109.711 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:04:39.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 1 08:04:39.323: INFO: Waiting up to 5m0s for pod "pod-831219a4-bb71-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-w556c" to be "success or failure" Jul 1 08:04:39.405: INFO: Pod "pod-831219a4-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 81.967165ms Jul 1 08:04:41.409: INFO: Pod "pod-831219a4-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085983121s Jul 1 08:04:43.495: INFO: Pod "pod-831219a4-bb71-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171990076s STEP: Saw pod success Jul 1 08:04:43.495: INFO: Pod "pod-831219a4-bb71-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:04:43.498: INFO: Trying to get logs from node hunter-worker2 pod pod-831219a4-bb71-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 08:04:43.558: INFO: Waiting for pod pod-831219a4-bb71-11ea-a133-0242ac110018 to disappear Jul 1 08:04:43.571: INFO: Pod pod-831219a4-bb71-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:04:43.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-w556c" for this suite. 
Jul 1 08:04:51.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:04:51.605: INFO: namespace: e2e-tests-emptydir-w556c, resource: bindings, ignored listing per whitelist
Jul 1 08:04:51.670: INFO: namespace e2e-tests-emptydir-w556c deletion completed in 8.095757531s

• [SLOW TEST:12.520 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:04:51.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-8a821b25-bb71-11ea-a133-0242ac110018
STEP: Creating a pod to test consume configMaps
Jul 1 08:04:51.795: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8a84c8ae-bb71-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-wcr8k" to be "success or failure"
Jul 1 08:04:51.836: INFO: Pod "pod-projected-configmaps-8a84c8ae-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 40.892235ms
Jul 1 08:04:54.609: INFO: Pod "pod-projected-configmaps-8a84c8ae-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.814568567s
Jul 1 08:04:56.614: INFO: Pod "pod-projected-configmaps-8a84c8ae-bb71-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.818753858s
STEP: Saw pod success
Jul 1 08:04:56.614: INFO: Pod "pod-projected-configmaps-8a84c8ae-bb71-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:04:56.617: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-8a84c8ae-bb71-11ea-a133-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
Jul 1 08:04:56.644: INFO: Waiting for pod pod-projected-configmaps-8a84c8ae-bb71-11ea-a133-0242ac110018 to disappear
Jul 1 08:04:56.668: INFO: Pod pod-projected-configmaps-8a84c8ae-bb71-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:04:56.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wcr8k" for this suite.
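The projected configMap test above mounts a configMap through a projected volume while remapping a key to a different file path and setting a per-item file mode. A rough sketch of such a volume definition; the key, path, and mode values are illustrative, not necessarily the suite's exact ones:

```python
# Sketch of a projected volume that remaps a configMap key to a new
# path with an explicit per-item mode. Values are illustrative.
def projected_configmap_volume(cm_name):
    return {
        "name": "projected-configmap-volume",
        "projected": {
            "sources": [{
                "configMap": {
                    "name": cm_name,
                    "items": [{
                        "key": "data-1",           # key in the ConfigMap
                        "path": "path/to/data-2",  # file path inside the mount
                        "mode": 0o400,             # per-item file mode
                    }],
                },
            }],
        },
    }
```

The container then reads the file at `<mountPath>/path/to/data-2` and verifies both its content and its mode.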
Jul 1 08:05:02.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:05:02.779: INFO: namespace: e2e-tests-projected-wcr8k, resource: bindings, ignored listing per whitelist
Jul 1 08:05:02.796: INFO: namespace e2e-tests-projected-wcr8k deletion completed in 6.12438386s

• [SLOW TEST:11.125 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:05:02.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul 1 08:05:10.139: INFO: 2 pods remaining
Jul 1 08:05:10.139: INFO: 0 pods has nil DeletionTimestamp
Jul 1 08:05:10.139: INFO:
Jul 1 08:05:11.825: INFO: 0 pods remaining
Jul 1 08:05:11.825: INFO: 0 pods has nil DeletionTimestamp
Jul 1 08:05:11.825: INFO:
STEP: Gathering metrics
W0701 08:05:11.986515 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 1 08:05:11.986: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:05:11.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-wr8rb" for this suite.
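The garbage-collector test above deletes a ReplicationController with deleteOptions that keep the owner around until its dependent pods are gone, which is what foreground cascading deletion does. A minimal sketch of such a deleteOptions body as a hand-built dict (the suite itself goes through a Go client, so this is only illustrative):

```python
# Sketch of a deleteOptions body for foreground cascading deletion.
# With propagationPolicy "Foreground", the owner (here the RC) is kept,
# carrying a deletionTimestamp and the foregroundDeletion finalizer,
# until the garbage collector has removed its pods.
def delete_options(policy="Foreground"):
    allowed = ("Orphan", "Background", "Foreground")
    if policy not in allowed:
        raise ValueError(f"propagationPolicy must be one of {allowed}")
    return {
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        "propagationPolicy": policy,
    }
```

The "2 pods remaining" / "0 pods remaining" lines in the log are the test polling until the GC has finished that foreground pass.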
Jul 1 08:05:18.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:05:18.050: INFO: namespace: e2e-tests-gc-wr8rb, resource: bindings, ignored listing per whitelist
Jul 1 08:05:18.074: INFO: namespace e2e-tests-gc-wr8rb deletion completed in 6.083597418s

• [SLOW TEST:15.277 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:05:18.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-9a3e8c46-bb71-11ea-a133-0242ac110018
STEP: Creating a pod to test consume configMaps
Jul 1 08:05:18.247: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9a45189d-bb71-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-wmrgk" to be "success or failure"
Jul 1 08:05:18.263: INFO: Pod "pod-projected-configmaps-9a45189d-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108746ms
Jul 1 08:05:20.813: INFO: Pod "pod-projected-configmaps-9a45189d-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566423428s
Jul 1 08:05:22.819: INFO: Pod "pod-projected-configmaps-9a45189d-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571835053s
Jul 1 08:05:24.822: INFO: Pod "pod-projected-configmaps-9a45189d-bb71-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.575428187s
STEP: Saw pod success
Jul 1 08:05:24.822: INFO: Pod "pod-projected-configmaps-9a45189d-bb71-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:05:24.824: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-9a45189d-bb71-11ea-a133-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
Jul 1 08:05:24.873: INFO: Waiting for pod pod-projected-configmaps-9a45189d-bb71-11ea-a133-0242ac110018 to disappear
Jul 1 08:05:24.902: INFO: Pod pod-projected-configmaps-9a45189d-bb71-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:05:24.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wmrgk" for this suite.
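The multi-volume variant above consumes one configMap through two separate projected volumes mounted in the same pod. A sketch of the relevant part of the pod spec; the volume names, mount paths, and image are illustrative assumptions:

```python
# Sketch: one configMap consumed via two projected volumes in one pod,
# as in the "multiple volumes in the same pod" test. Names, paths, and
# the image are illustrative assumptions.
def two_volume_pod_spec(cm_name):
    def volume(n):
        return {"name": n,
                "projected": {"sources": [{"configMap": {"name": cm_name}}]}}
    return {
        "restartPolicy": "Never",
        "containers": [{
            "name": "projected-configmap-volume-test",
            "image": "busybox",  # illustrative
            "volumeMounts": [
                {"name": "projected-configmap-volume-1",
                 "mountPath": "/etc/projected-configmap-volume-1"},
                {"name": "projected-configmap-volume-2",
                 "mountPath": "/etc/projected-configmap-volume-2"},
            ],
        }],
        "volumes": [volume("projected-configmap-volume-1"),
                    volume("projected-configmap-volume-2")],
    }
```

The container can then verify that the same data appears under both mount paths.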
Jul 1 08:05:30.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:05:30.962: INFO: namespace: e2e-tests-projected-wmrgk, resource: bindings, ignored listing per whitelist
Jul 1 08:05:31.005: INFO: namespace e2e-tests-projected-wmrgk deletion completed in 6.100529409s

• [SLOW TEST:12.931 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:05:31.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 1 08:05:31.144: INFO: Waiting up to 5m0s for pod "pod-a1f56412-bb71-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-j4ttl" to be "success or failure"
Jul 1 08:05:31.153: INFO: Pod "pod-a1f56412-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.545305ms
Jul 1 08:05:33.157: INFO: Pod "pod-a1f56412-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013015196s
Jul 1 08:05:35.161: INFO: Pod "pod-a1f56412-bb71-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017360459s
Jul 1 08:05:37.166: INFO: Pod "pod-a1f56412-bb71-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021833006s
STEP: Saw pod success
Jul 1 08:05:37.166: INFO: Pod "pod-a1f56412-bb71-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:05:37.169: INFO: Trying to get logs from node hunter-worker2 pod pod-a1f56412-bb71-11ea-a133-0242ac110018 container test-container:
STEP: delete the pod
Jul 1 08:05:37.206: INFO: Waiting for pod pod-a1f56412-bb71-11ea-a133-0242ac110018 to disappear
Jul 1 08:05:37.220: INFO: Pod pod-a1f56412-bb71-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:05:37.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-j4ttl" for this suite.
Jul 1 08:05:43.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:05:43.360: INFO: namespace: e2e-tests-emptydir-j4ttl, resource: bindings, ignored listing per whitelist
Jul 1 08:05:43.371: INFO: namespace e2e-tests-emptydir-j4ttl deletion completed in 6.145032004s

• [SLOW TEST:12.366 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:05:43.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jul 1 08:05:44.005: INFO: Waiting up to 5m0s for pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s" in namespace "e2e-tests-svcaccounts-m8k2r" to be "success or failure"
Jul 1 08:05:44.064: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s": Phase="Pending", Reason="", readiness=false. Elapsed: 59.394152ms
Jul 1 08:05:46.070: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064999585s
Jul 1 08:05:48.075: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069475892s
Jul 1 08:05:50.079: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07402316s
Jul 1 08:05:52.083: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078427141s
STEP: Saw pod success
Jul 1 08:05:52.084: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s" satisfied condition "success or failure"
Jul 1 08:05:52.087: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s container token-test:
STEP: delete the pod
Jul 1 08:05:52.127: INFO: Waiting for pod pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s to disappear
Jul 1 08:05:52.138: INFO: Pod pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-wxc2s no longer exists
STEP: Creating a pod to test consume service account root CA
Jul 1 08:05:52.142: INFO: Waiting up to 5m0s for pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6" in namespace "e2e-tests-svcaccounts-m8k2r" to be "success or failure"
Jul 1 08:05:52.160: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.480191ms
Jul 1 08:05:54.165: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02261632s
Jul 1 08:05:56.169: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026976681s
Jul 1 08:05:58.172: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030161603s
Jul 1 08:06:00.176: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034281102s
STEP: Saw pod success
Jul 1 08:06:00.176: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6" satisfied condition "success or failure"
Jul 1 08:06:00.179: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6 container root-ca-test:
STEP: delete the pod
Jul 1 08:06:00.211: INFO: Waiting for pod pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6 to disappear
Jul 1 08:06:00.221: INFO: Pod pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-qdrn6 no longer exists
STEP: Creating a pod to test consume service account namespace
Jul 1 08:06:00.224: INFO: Waiting up to 5m0s for pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np" in namespace "e2e-tests-svcaccounts-m8k2r" to be "success or failure"
Jul 1 08:06:00.227: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680033ms
Jul 1 08:06:02.231: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00699387s
Jul 1 08:06:04.236: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011546616s
Jul 1 08:06:06.240: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015849779s
Jul 1 08:06:08.244: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019878956s
STEP: Saw pod success
Jul 1 08:06:08.244: INFO: Pod "pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np" satisfied condition "success or failure"
Jul 1 08:06:08.247: INFO: Trying to get logs from node hunter-worker pod pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np container namespace-test:
STEP: delete the pod
Jul 1 08:06:08.287: INFO: Waiting for pod pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np to disappear
Jul 1 08:06:08.343: INFO: Pod pod-service-account-a9a3bf38-bb71-11ea-a133-0242ac110018-244np no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:06:08.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-m8k2r" for this suite.
Jul 1 08:06:16.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:06:16.424: INFO: namespace: e2e-tests-svcaccounts-m8k2r, resource: bindings, ignored listing per whitelist
Jul 1 08:06:16.458: INFO: namespace e2e-tests-svcaccounts-m8k2r deletion completed in 8.110231643s

• [SLOW TEST:33.087 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:06:16.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 1 08:06:17.200: INFO: Pod name wrapped-volume-race-bd64e247-bb71-11ea-a133-0242ac110018: Found 0 pods out of 5
Jul 1 08:06:22.209: INFO: Pod name wrapped-volume-race-bd64e247-bb71-11ea-a133-0242ac110018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bd64e247-bb71-11ea-a133-0242ac110018 in namespace e2e-tests-emptydir-wrapper-hx86h, will wait for the garbage collector to delete the pods
Jul 1 08:08:14.291: INFO: Deleting ReplicationController wrapped-volume-race-bd64e247-bb71-11ea-a133-0242ac110018 took: 7.280153ms
Jul 1 08:08:14.392: INFO: Terminating ReplicationController wrapped-volume-race-bd64e247-bb71-11ea-a133-0242ac110018 pods took: 100.240794ms
STEP: Creating RC which spawns configmap-volume pods
Jul 1 08:08:52.377: INFO: Pod name wrapped-volume-race-19dee1a3-bb72-11ea-a133-0242ac110018: Found 0 pods out of 5
Jul 1 08:08:57.383: INFO: Pod name wrapped-volume-race-19dee1a3-bb72-11ea-a133-0242ac110018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-19dee1a3-bb72-11ea-a133-0242ac110018 in namespace e2e-tests-emptydir-wrapper-hx86h, will wait for the garbage collector to delete the pods
Jul 1 08:11:03.479: INFO: Deleting ReplicationController wrapped-volume-race-19dee1a3-bb72-11ea-a133-0242ac110018 took: 7.971096ms
Jul 1 08:11:03.579: INFO: Terminating ReplicationController wrapped-volume-race-19dee1a3-bb72-11ea-a133-0242ac110018 pods took: 100.245885ms
STEP: Creating RC which spawns configmap-volume pods
Jul 1 08:11:41.755: INFO: Pod name wrapped-volume-race-7ed550b2-bb72-11ea-a133-0242ac110018: Found 0 pods out of 5
Jul 1 08:11:46.763: INFO: Pod name wrapped-volume-race-7ed550b2-bb72-11ea-a133-0242ac110018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7ed550b2-bb72-11ea-a133-0242ac110018 in namespace e2e-tests-emptydir-wrapper-hx86h, will wait for the garbage collector to delete the pods
Jul 1 08:14:22.856: INFO: Deleting ReplicationController wrapped-volume-race-7ed550b2-bb72-11ea-a133-0242ac110018 took: 7.859872ms
Jul 1 08:14:23.056: INFO: Terminating ReplicationController wrapped-volume-race-7ed550b2-bb72-11ea-a133-0242ac110018 pods took: 200.302856ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:15:02.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-hx86h" for this suite.
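The wrapper-volume race test above creates 50 configMaps and repeatedly spawns a ReplicationController whose 5 pods each mount all of them at once, exercising concurrent volume setup. A sketch of how such a pod template could be assembled; the counts mirror the log, but the volume names, mount paths, and image are illustrative assumptions:

```python
# Sketch: a pod template mounting many configMap volumes at once, as in
# the emptyDir wrapper race test (50 configMaps, 5 replicas). Names,
# paths, and the image are illustrative assumptions.
def wrapped_volume_pod_template(cm_names):
    volumes, mounts = [], []
    for i, cm in enumerate(cm_names):
        volumes.append({"name": f"racey-configmap-{i}",
                        "configMap": {"name": cm}})
        mounts.append({"name": f"racey-configmap-{i}",
                       "mountPath": f"/etc/config-{i}"})
    return {
        "containers": [{"name": "test-container",
                        "image": "busybox",  # illustrative
                        "volumeMounts": mounts}],
        "volumes": volumes,
    }

template = wrapped_volume_pod_template([f"configmap-{i}" for i in range(50)])
```

The test then relies on the garbage collector (as the log's "will wait for the garbage collector to delete the pods" lines show) rather than deleting the pods directly.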
Jul 1 08:15:10.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:15:10.817: INFO: namespace: e2e-tests-emptydir-wrapper-hx86h, resource: bindings, ignored listing per whitelist
Jul 1 08:15:10.887: INFO: namespace e2e-tests-emptydir-wrapper-hx86h deletion completed in 8.088733873s

• [SLOW TEST:534.429 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:15:10.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 1 08:15:10.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb9676f4-bb72-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-t7d7b" to be "success or failure"
Jul 1 08:15:11.002: INFO: Pod "downwardapi-volume-fb9676f4-bb72-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.902627ms
Jul 1 08:15:13.006: INFO: Pod "downwardapi-volume-fb9676f4-bb72-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008789639s
Jul 1 08:15:15.011: INFO: Pod "downwardapi-volume-fb9676f4-bb72-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012953086s
STEP: Saw pod success
Jul 1 08:15:15.011: INFO: Pod "downwardapi-volume-fb9676f4-bb72-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:15:15.014: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-fb9676f4-bb72-11ea-a133-0242ac110018 container client-container:
STEP: delete the pod
Jul 1 08:15:15.044: INFO: Waiting for pod downwardapi-volume-fb9676f4-bb72-11ea-a133-0242ac110018 to disappear
Jul 1 08:15:15.091: INFO: Pod downwardapi-volume-fb9676f4-bb72-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:15:15.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t7d7b" for this suite.
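The downward API test above exposes the container's own memory limit as a file via a `resourceFieldRef`, which the container then reads and prints. A sketch of that kind of volume definition; the volume and path names are illustrative assumptions:

```python
# Sketch: a downward API volume exposing a container's memory limit as
# a file; the kubelet writes the value of limits.memory into the path.
# Volume and path names are illustrative assumptions.
def downward_api_memory_volume(container_name):
    return {
        "name": "podinfo",
        "downwardAPI": {
            "items": [{
                "path": "memory_limit",
                "resourceFieldRef": {
                    "containerName": container_name,
                    "resource": "limits.memory",
                    "divisor": "1",  # report the raw value in bytes
                },
            }],
        },
    }
```

With a divisor of "1Mi" instead, the file would contain the limit in mebibytes rather than bytes.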
Jul 1 08:15:21.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:15:21.174: INFO: namespace: e2e-tests-downward-api-t7d7b, resource: bindings, ignored listing per whitelist
Jul 1 08:15:21.198: INFO: namespace e2e-tests-downward-api-t7d7b deletion completed in 6.10334832s

• [SLOW TEST:10.311 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:15:21.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-4l84s in namespace e2e-tests-proxy-5k29v
I0701 08:15:21.412035 6 runners.go:184] Created replication controller with name: proxy-service-4l84s, namespace: e2e-tests-proxy-5k29v, replica count: 1
I0701 08:15:22.462438 6 runners.go:184] proxy-service-4l84s Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0701 08:15:23.462658 6 runners.go:184] proxy-service-4l84s Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0701 08:15:24.462876 6 runners.go:184] proxy-service-4l84s Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0701 08:15:25.463111 6 runners.go:184] proxy-service-4l84s Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0701 08:15:26.463331 6 runners.go:184] proxy-service-4l84s Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0701 08:15:27.463535 6 runners.go:184] proxy-service-4l84s Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0701 08:15:28.463788 6 runners.go:184] proxy-service-4l84s Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0701 08:15:29.463967 6 runners.go:184] proxy-service-4l84s Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jul 1 08:15:29.466: INFO: setup took 8.144756964s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul 1 08:15:29.472: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5k29v/pods/proxy-service-4l84s-srtp4:162/proxy/: bar (200; 5.146923ms)
Jul 1 08:15:29.472: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5k29v/pods/http:proxy-service-4l84s-srtp4:162/proxy/: bar (200; 5.314995ms)
Jul 1 08:15:29.472: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5k29v/pods/proxy-service-4l84s-srtp4:160/proxy/: foo (200; 5.385103ms)
Jul 1 08:15:29.473: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5k29v/pods/http:proxy-service-4l84s-srtp4:160/proxy/: foo (200; 6.752502ms)
Jul 1 08:15:29.473: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5k29v/services/http:proxy-service-4l84s:portname1/proxy/: foo (200; 6.937526ms)
Jul 1 08:15:29.473: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5k29v/pods/http:proxy-service-4l84s-srtp4:1080/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 1 08:15:48.067: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11ad5c45-bb73-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-bchn9" to be "success or failure"
Jul 1 08:15:48.081: INFO: Pod "downwardapi-volume-11ad5c45-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.450158ms
Jul 1 08:15:50.119: INFO: Pod "downwardapi-volume-11ad5c45-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051441354s
Jul 1 08:15:52.123: INFO: Pod "downwardapi-volume-11ad5c45-bb73-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055525939s
STEP: Saw pod success
Jul 1 08:15:52.123: INFO: Pod "downwardapi-volume-11ad5c45-bb73-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:15:52.126: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-11ad5c45-bb73-11ea-a133-0242ac110018 container client-container:
STEP: delete the pod
Jul 1 08:15:52.249: INFO: Waiting for pod downwardapi-volume-11ad5c45-bb73-11ea-a133-0242ac110018 to disappear
Jul 1 08:15:52.291: INFO: Pod downwardapi-volume-11ad5c45-bb73-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:15:52.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bchn9" for this suite.
Jul 1 08:15:58.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:15:58.414: INFO: namespace: e2e-tests-projected-bchn9, resource: bindings, ignored listing per whitelist
Jul 1 08:15:58.432: INFO: namespace e2e-tests-projected-bchn9 deletion completed in 6.136752717s

• [SLOW TEST:10.509 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:15:58.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jul 1 08:15:58.538: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-xsjpl" to be "success or failure"
Jul 1 08:15:58.567: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 29.091577ms
Jul 1 08:16:00.573: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034775209s
Jul 1 08:16:02.577: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038710296s
Jul 1 08:16:04.589: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050765087s
STEP: Saw pod success
Jul 1 08:16:04.589: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul 1 08:16:04.592: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jul 1 08:16:04.622: INFO: Waiting for pod pod-host-path-test to disappear
Jul 1 08:16:04.650: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:16:04.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-xsjpl" for this suite.
Jul 1 08:16:10.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:16:10.706: INFO: namespace: e2e-tests-hostpath-xsjpl, resource: bindings, ignored listing per whitelist
Jul 1 08:16:10.739: INFO: namespace e2e-tests-hostpath-xsjpl deletion completed in 6.085197586s

• [SLOW TEST:12.307 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:16:10.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-1f471a93-bb73-11ea-a133-0242ac110018
STEP: Creating a pod to test consume configMaps
Jul 1 08:16:10.880: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f48d724-bb73-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-sdn9s" to be "success or failure"
Jul 1 08:16:10.883: INFO: Pod "pod-projected-configmaps-1f48d724-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.545966ms
Jul 1 08:16:12.896: INFO: Pod "pod-projected-configmaps-1f48d724-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016250649s
Jul 1 08:16:14.906: INFO: Pod "pod-projected-configmaps-1f48d724-bb73-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026213623s
STEP: Saw pod success
Jul 1 08:16:14.906: INFO: Pod "pod-projected-configmaps-1f48d724-bb73-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:16:14.908: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-1f48d724-bb73-11ea-a133-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
Jul 1 08:16:14.948: INFO: Waiting for pod pod-projected-configmaps-1f48d724-bb73-11ea-a133-0242ac110018 to disappear
Jul 1 08:16:14.962: INFO: Pod pod-projected-configmaps-1f48d724-bb73-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:16:14.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sdn9s" for this suite.
Jul 1 08:16:20.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:16:21.025: INFO: namespace: e2e-tests-projected-sdn9s, resource: bindings, ignored listing per whitelist
Jul 1 08:16:21.126: INFO: namespace e2e-tests-projected-sdn9s deletion completed in 6.16040366s

• [SLOW TEST:10.387 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:16:21.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-clhxc
Jul 1 08:16:27.401: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-clhxc
STEP: checking the pod's current state and verifying that restartCount is present
Jul 1 08:16:27.404: INFO: Initial restart count of pod liveness-http is 0
Jul 1 08:16:49.455: INFO: Restart count of pod e2e-tests-container-probe-clhxc/liveness-http is now 1 (22.050401325s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:16:49.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-clhxc" for this suite.
Jul 1 08:16:55.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:16:55.612: INFO: namespace: e2e-tests-container-probe-clhxc, resource: bindings, ignored listing per whitelist
Jul 1 08:16:55.676: INFO: namespace e2e-tests-container-probe-clhxc deletion completed in 6.119492229s

• [SLOW TEST:34.549 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:16:55.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 1 08:16:55.816: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a103a3b-bb73-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-7zk7g" to be "success or failure"
Jul 1 08:16:55.819: INFO: Pod "downwardapi-volume-3a103a3b-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.375416ms
Jul 1 08:16:57.824: INFO: Pod "downwardapi-volume-3a103a3b-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008312702s
Jul 1 08:16:59.828: INFO: Pod "downwardapi-volume-3a103a3b-bb73-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012195283s
STEP: Saw pod success
Jul 1 08:16:59.828: INFO: Pod "downwardapi-volume-3a103a3b-bb73-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:16:59.831: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-3a103a3b-bb73-11ea-a133-0242ac110018 container client-container:
STEP: delete the pod
Jul 1 08:16:59.850: INFO: Waiting for pod downwardapi-volume-3a103a3b-bb73-11ea-a133-0242ac110018 to disappear
Jul 1 08:16:59.869: INFO: Pod downwardapi-volume-3a103a3b-bb73-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:16:59.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7zk7g" for this suite.
Jul 1 08:17:05.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:17:05.958: INFO: namespace: e2e-tests-downward-api-7zk7g, resource: bindings, ignored listing per whitelist
Jul 1 08:17:05.992: INFO: namespace e2e-tests-downward-api-7zk7g deletion completed in 6.118665929s

• [SLOW TEST:10.316 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:17:05.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 1 08:17:06.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:17:10.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9b58z" for this suite.
Jul 1 08:17:54.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:17:54.367: INFO: namespace: e2e-tests-pods-9b58z, resource: bindings, ignored listing per whitelist
Jul 1 08:17:54.389: INFO: namespace e2e-tests-pods-9b58z deletion completed in 44.091598853s

• [SLOW TEST:48.397 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:17:54.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 1 08:17:54.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d0ee34f-bb73-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-htmcd" to be "success or failure"
Jul 1 08:17:54.622: INFO: Pod "downwardapi-volume-5d0ee34f-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 86.16772ms
Jul 1 08:17:56.626: INFO: Pod "downwardapi-volume-5d0ee34f-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090694584s
Jul 1 08:17:58.630: INFO: Pod "downwardapi-volume-5d0ee34f-bb73-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.094836513s
Jul 1 08:18:00.634: INFO: Pod "downwardapi-volume-5d0ee34f-bb73-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098240789s
STEP: Saw pod success
Jul 1 08:18:00.634: INFO: Pod "downwardapi-volume-5d0ee34f-bb73-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:18:00.636: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5d0ee34f-bb73-11ea-a133-0242ac110018 container client-container:
STEP: delete the pod
Jul 1 08:18:00.674: INFO: Waiting for pod downwardapi-volume-5d0ee34f-bb73-11ea-a133-0242ac110018 to disappear
Jul 1 08:18:00.689: INFO: Pod downwardapi-volume-5d0ee34f-bb73-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:18:00.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-htmcd" for this suite.
Jul 1 08:18:06.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:18:06.735: INFO: namespace: e2e-tests-projected-htmcd, resource: bindings, ignored listing per whitelist
Jul 1 08:18:06.787: INFO: namespace e2e-tests-projected-htmcd deletion completed in 6.094357868s

• [SLOW TEST:12.397 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:18:06.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 1 08:18:06.916: INFO: Waiting up to 5m0s for pod "downwardapi-volume-646ee8ec-bb73-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-l4cwp" to be "success or failure"
Jul 1 08:18:06.923: INFO: Pod "downwardapi-volume-646ee8ec-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.252878ms
Jul 1 08:18:08.927: INFO: Pod "downwardapi-volume-646ee8ec-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010979278s
Jul 1 08:18:10.931: INFO: Pod "downwardapi-volume-646ee8ec-bb73-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.015235235s
Jul 1 08:18:12.936: INFO: Pod "downwardapi-volume-646ee8ec-bb73-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019671497s
STEP: Saw pod success
Jul 1 08:18:12.936: INFO: Pod "downwardapi-volume-646ee8ec-bb73-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:18:12.939: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-646ee8ec-bb73-11ea-a133-0242ac110018 container client-container:
STEP: delete the pod
Jul 1 08:18:12.977: INFO: Waiting for pod downwardapi-volume-646ee8ec-bb73-11ea-a133-0242ac110018 to disappear
Jul 1 08:18:13.009: INFO: Pod downwardapi-volume-646ee8ec-bb73-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:18:13.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l4cwp" for this suite.
Jul 1 08:18:19.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:18:19.130: INFO: namespace: e2e-tests-projected-l4cwp, resource: bindings, ignored listing per whitelist
Jul 1 08:18:19.145: INFO: namespace e2e-tests-projected-l4cwp deletion completed in 6.132409302s

• [SLOW TEST:12.359 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:18:19.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-77gw7
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-77gw7
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-77gw7
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-77gw7
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-77gw7
Jul 1 08:18:23.401: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-77gw7, name: ss-0, uid: 6d2c6c5f-bb73-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete.
Jul 1 08:18:31.245: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-77gw7, name: ss-0, uid: 6d2c6c5f-bb73-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
Jul 1 08:18:31.352: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-77gw7, name: ss-0, uid: 6d2c6c5f-bb73-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
Jul 1 08:18:31.535: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-77gw7
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-77gw7
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-77gw7 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 1 08:18:35.807: INFO: Deleting all statefulset in ns e2e-tests-statefulset-77gw7
Jul 1 08:18:35.809: INFO: Scaling statefulset ss to 0
Jul 1 08:18:55.906: INFO: Waiting for statefulset status.replicas updated to 0
Jul 1 08:18:55.909: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:18:55.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-77gw7" for this suite.
Jul 1 08:19:01.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:19:02.039: INFO: namespace: e2e-tests-statefulset-77gw7, resource: bindings, ignored listing per whitelist
Jul 1 08:19:02.072: INFO: namespace e2e-tests-statefulset-77gw7 deletion completed in 6.10012514s

• [SLOW TEST:42.927 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:19:02.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:19:02.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-cjjvr" for this suite.
Jul 1 08:19:08.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:19:08.387: INFO: namespace: e2e-tests-services-cjjvr, resource: bindings, ignored listing per whitelist
Jul 1 08:19:08.451: INFO: namespace e2e-tests-services-cjjvr deletion completed in 6.157289346s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.379 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:19:08.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 1 08:19:08.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8934bd31-bb73-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-bscl8" to be "success or failure"
Jul 1 08:19:08.602: INFO: Pod "downwardapi-volume-8934bd31-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.896319ms
Jul 1 08:19:10.606: INFO: Pod "downwardapi-volume-8934bd31-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021874388s
Jul 1 08:19:12.610: INFO: Pod "downwardapi-volume-8934bd31-bb73-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025805124s
STEP: Saw pod success
Jul 1 08:19:12.610: INFO: Pod "downwardapi-volume-8934bd31-bb73-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:19:12.612: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-8934bd31-bb73-11ea-a133-0242ac110018 container client-container:
STEP: delete the pod
Jul 1 08:19:12.684: INFO: Waiting for pod downwardapi-volume-8934bd31-bb73-11ea-a133-0242ac110018 to disappear
Jul 1 08:19:12.696: INFO: Pod downwardapi-volume-8934bd31-bb73-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:19:12.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bscl8" for this suite.
Jul 1 08:19:18.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:19:18.759: INFO: namespace: e2e-tests-downward-api-bscl8, resource: bindings, ignored listing per whitelist
Jul 1 08:19:18.860: INFO: namespace e2e-tests-downward-api-bscl8 deletion completed in 6.161005083s

• [SLOW TEST:10.409 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:19:18.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:19:23.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tv6z2" for this suite.
Jul 1 08:20:13.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:20:13.071: INFO: namespace: e2e-tests-kubelet-test-tv6z2, resource: bindings, ignored listing per whitelist Jul 1 08:20:13.139: INFO: namespace e2e-tests-kubelet-test-tv6z2 deletion completed in 50.120266543s • [SLOW TEST:54.278 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:20:13.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-afc19695-bb73-11ea-a133-0242ac110018 STEP: Creating a pod to test consume configMaps Jul 1 08:20:13.306: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-afc85b29-bb73-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-4rbtb" to be "success or failure" Jul 1 
08:20:13.317: INFO: Pod "pod-projected-configmaps-afc85b29-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.753595ms Jul 1 08:20:15.521: INFO: Pod "pod-projected-configmaps-afc85b29-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215124379s Jul 1 08:20:17.525: INFO: Pod "pod-projected-configmaps-afc85b29-bb73-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.219224438s STEP: Saw pod success Jul 1 08:20:17.525: INFO: Pod "pod-projected-configmaps-afc85b29-bb73-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:20:17.528: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-afc85b29-bb73-11ea-a133-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jul 1 08:20:17.654: INFO: Waiting for pod pod-projected-configmaps-afc85b29-bb73-11ea-a133-0242ac110018 to disappear Jul 1 08:20:17.663: INFO: Pod pod-projected-configmaps-afc85b29-bb73-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:20:17.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4rbtb" for this suite. 
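The `defaultMode` that the projected-configMap test verifies is carried as a decimal integer in the API; the common value 420 is 0644 in octal (owner read/write, group and other read). A quick local illustration of the same permission bits using plain filesystem calls — this is not the e2e framework's code:

```python
import os
import stat
import tempfile

# The API stores defaultMode as a decimal int: 420 == 0o644.
default_mode = 0o644
assert default_mode == 420

# Apply the same bits to a scratch file and read them back, analogous
# to the test reading the mode of a mounted configMap key.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, default_mode)
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # → 0o644
os.unlink(path)
```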
Jul 1 08:20:23.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:20:23.724: INFO: namespace: e2e-tests-projected-4rbtb, resource: bindings, ignored listing per whitelist Jul 1 08:20:23.778: INFO: namespace e2e-tests-projected-4rbtb deletion completed in 6.111145457s • [SLOW TEST:10.639 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:20:23.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jul 1 08:20:23.916: INFO: Waiting up to 5m0s for pod "var-expansion-b619a485-bb73-11ea-a133-0242ac110018" in namespace "e2e-tests-var-expansion-tdhrg" to be "success or failure" Jul 1 08:20:23.932: INFO: Pod "var-expansion-b619a485-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.174609ms Jul 1 08:20:25.982: INFO: Pod "var-expansion-b619a485-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066179835s Jul 1 08:20:27.987: INFO: Pod "var-expansion-b619a485-bb73-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070553091s STEP: Saw pod success Jul 1 08:20:27.987: INFO: Pod "var-expansion-b619a485-bb73-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:20:27.989: INFO: Trying to get logs from node hunter-worker pod var-expansion-b619a485-bb73-11ea-a133-0242ac110018 container dapi-container: STEP: delete the pod Jul 1 08:20:28.065: INFO: Waiting for pod var-expansion-b619a485-bb73-11ea-a133-0242ac110018 to disappear Jul 1 08:20:28.131: INFO: Pod var-expansion-b619a485-bb73-11ea-a133-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:20:28.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-tdhrg" for this suite. 
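The var-expansion case above substitutes `$(VAR)` references in a container's args from its environment. A simplified sketch of that substitution rule (it ignores the `$$` escape sequence the real Kubernetes expander also handles):

```python
import re

def expand(arg: str, env: dict) -> str:
    # Replace $(NAME) with the variable's value; references that do not
    # resolve are left as-is, matching documented Kubernetes behavior.
    return re.sub(
        r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
        lambda m: env.get(m.group(1), m.group(0)),
        arg,
    )

print(expand("test-value=$(TEST_VAR)", {"TEST_VAR": "test-value"}))
# → test-value=$(TEST_VAR) expanded: test-value=test-value
print(expand("missing=$(NOPE)", {}))
# → missing=$(NOPE)  (unresolved reference stays literal)
```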
Jul 1 08:20:34.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:20:34.236: INFO: namespace: e2e-tests-var-expansion-tdhrg, resource: bindings, ignored listing per whitelist Jul 1 08:20:34.270: INFO: namespace e2e-tests-var-expansion-tdhrg deletion completed in 6.135006281s • [SLOW TEST:10.491 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:20:34.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 1 08:20:34.427: INFO: Waiting up to 5m0s for pod "downward-api-bc5dc532-bb73-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-rhb5g" to be "success or failure" Jul 1 08:20:34.430: INFO: Pod "downward-api-bc5dc532-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.469343ms Jul 1 08:20:36.435: INFO: Pod "downward-api-bc5dc532-bb73-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008289042s Jul 1 08:20:38.440: INFO: Pod "downward-api-bc5dc532-bb73-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012820487s STEP: Saw pod success Jul 1 08:20:38.440: INFO: Pod "downward-api-bc5dc532-bb73-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:20:38.443: INFO: Trying to get logs from node hunter-worker pod downward-api-bc5dc532-bb73-11ea-a133-0242ac110018 container dapi-container: STEP: delete the pod Jul 1 08:20:38.462: INFO: Waiting for pod downward-api-bc5dc532-bb73-11ea-a133-0242ac110018 to disappear Jul 1 08:20:38.466: INFO: Pod downward-api-bc5dc532-bb73-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:20:38.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rhb5g" for this suite. 
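The downward-API pod above receives its name, namespace, and IP through `fieldRef` env entries. A hedged sketch of how those entries are shaped — the `fieldPath` values are the real API paths, but the helper function is illustrative:

```python
def downward_env(mapping: dict) -> list:
    """Build env entries that pull pod fields via the downward API."""
    return [
        {"name": name, "valueFrom": {"fieldRef": {"fieldPath": path}}}
        for name, path in mapping.items()
    ]

env = downward_env({
    "POD_NAME": "metadata.name",
    "POD_NAMESPACE": "metadata.namespace",
    "POD_IP": "status.podIP",
})
for entry in env:
    print(entry["name"], "<-", entry["valueFrom"]["fieldRef"]["fieldPath"])
```

The kubelet resolves each `fieldPath` at container start, which is why the test container can simply echo the variables and have the framework compare them against the pod object.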
Jul 1 08:20:44.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:20:44.625: INFO: namespace: e2e-tests-downward-api-rhb5g, resource: bindings, ignored listing per whitelist Jul 1 08:20:44.643: INFO: namespace e2e-tests-downward-api-rhb5g deletion completed in 6.150956593s • [SLOW TEST:10.372 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:20:44.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-c28f767b-bb73-11ea-a133-0242ac110018 STEP: Creating secret with name s-test-opt-upd-c28f777d-bb73-11ea-a133-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c28f767b-bb73-11ea-a133-0242ac110018 STEP: Updating secret s-test-opt-upd-c28f777d-bb73-11ea-a133-0242ac110018 STEP: Creating secret with name s-test-opt-create-c28f77ad-bb73-11ea-a133-0242ac110018 STEP: waiting to observe update in volume [AfterEach] 
[sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:20:54.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q6c8d" for this suite. Jul 1 08:21:17.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:21:17.047: INFO: namespace: e2e-tests-projected-q6c8d, resource: bindings, ignored listing per whitelist Jul 1 08:21:17.112: INFO: namespace e2e-tests-projected-q6c8d deletion completed in 22.110061253s • [SLOW TEST:32.469 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:21:17.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 1 08:21:17.248: INFO: PodSpec: 
initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:21:25.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-9hmkf" for this suite. Jul 1 08:21:33.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:21:33.857: INFO: namespace: e2e-tests-init-container-9hmkf, resource: bindings, ignored listing per whitelist Jul 1 08:21:33.858: INFO: namespace e2e-tests-init-container-9hmkf deletion completed in 8.070622844s • [SLOW TEST:16.746 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:21:33.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 1 08:21:33.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6f678' Jul 1 08:21:37.752: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 1 08:21:37.752: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jul 1 08:21:37.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-6f678' Jul 1 08:21:38.477: INFO: stderr: "" Jul 1 08:21:38.477: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:21:38.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6f678" for this suite. 
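The deprecation warning in the kubectl output above points at the replacement invocation: newer kubectl versions create the Job directly rather than via a `run` generator. The strings below just restate the two command lines side by side; nothing is executed against a cluster:

```python
# Deprecated form, as run by the test:
deprecated = (
    "kubectl run e2e-test-nginx-job --restart=OnFailure "
    "--generator=job/v1 --image=docker.io/library/nginx:1.14-alpine"
)
# Replacement per the warning: create the Job resource directly.
replacement = (
    "kubectl create job e2e-test-nginx-job "
    "--image=docker.io/library/nginx:1.14-alpine"
)
print(replacement)
```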
Jul 1 08:21:44.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:21:44.787: INFO: namespace: e2e-tests-kubectl-6f678, resource: bindings, ignored listing per whitelist Jul 1 08:21:44.891: INFO: namespace e2e-tests-kubectl-6f678 deletion completed in 6.385122286s • [SLOW TEST:11.033 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:21:44.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-cmrfm Jul 1 08:21:51.109: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-cmrfm STEP: 
checking the pod's current state and verifying that restartCount is present Jul 1 08:21:51.112: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:25:52.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-cmrfm" for this suite. Jul 1 08:25:58.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:25:58.331: INFO: namespace: e2e-tests-container-probe-cmrfm, resource: bindings, ignored listing per whitelist Jul 1 08:25:58.358: INFO: namespace e2e-tests-container-probe-cmrfm deletion completed in 6.142712466s • [SLOW TEST:253.466 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:25:58.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7db387d2-bb74-11ea-a133-0242ac110018 STEP: Creating a pod to test consume secrets Jul 1 08:25:58.863: INFO: Waiting up to 5m0s for pod "pod-secrets-7db5fc96-bb74-11ea-a133-0242ac110018" in namespace "e2e-tests-secrets-g996n" to be "success or failure" Jul 1 08:25:58.885: INFO: Pod "pod-secrets-7db5fc96-bb74-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.039418ms Jul 1 08:26:01.017: INFO: Pod "pod-secrets-7db5fc96-bb74-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153983022s Jul 1 08:26:03.127: INFO: Pod "pod-secrets-7db5fc96-bb74-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264012004s Jul 1 08:26:05.132: INFO: Pod "pod-secrets-7db5fc96-bb74-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.268216633s STEP: Saw pod success Jul 1 08:26:05.132: INFO: Pod "pod-secrets-7db5fc96-bb74-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:26:05.135: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7db5fc96-bb74-11ea-a133-0242ac110018 container secret-volume-test: STEP: delete the pod Jul 1 08:26:05.270: INFO: Waiting for pod pod-secrets-7db5fc96-bb74-11ea-a133-0242ac110018 to disappear Jul 1 08:26:05.279: INFO: Pod pod-secrets-7db5fc96-bb74-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:26:05.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-g996n" for this suite. 
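The secrets case above mounts the volume with both a `defaultMode` and an `fsGroup`, then reads it as a non-root user: group ownership (set from `fsGroup`) plus the group-read bit is what lets that read succeed. A small local illustration of the bit check — the mode value here is illustrative, not taken from the test's actual spec:

```python
import stat

default_mode = 0o640  # illustrative: owner rw, group r, other none
print(bool(default_mode & stat.S_IRGRP))  # → True: fsGroup members can read
print(bool(default_mode & stat.S_IROTH))  # → False: everyone else cannot
```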
Jul 1 08:26:11.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:26:11.462: INFO: namespace: e2e-tests-secrets-g996n, resource: bindings, ignored listing per whitelist Jul 1 08:26:11.489: INFO: namespace e2e-tests-secrets-g996n deletion completed in 6.20774049s • [SLOW TEST:13.131 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:26:11.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 08:26:11.723: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 1 08:26:11.765: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 1 08:26:16.862: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 1 08:26:16.862: INFO: Creating deployment "test-rolling-update-deployment" Jul 1 
08:26:16.865: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 1 08:26:16.907: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 1 08:26:18.914: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 1 08:26:18.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729188777, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729188777, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729188777, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729188777, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 08:26:21.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729188777, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729188777, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729188777, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729188777, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 08:26:23.305: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 1 08:26:23.387: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-qp66d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qp66d/deployments/test-rolling-update-deployment,UID:887bc08f-bb74-11ea-99e8-0242ac110002,ResourceVersion:18829304,Generation:1,CreationTimestamp:2020-07-01 08:26:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] 
map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-01 08:26:17 +0000 UTC 2020-07-01 08:26:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-01 08:26:22 +0000 UTC 2020-07-01 08:26:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 1 08:26:23.477: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-qp66d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qp66d/replicasets/test-rolling-update-deployment-75db98fb4c,UID:8883601f-bb74-11ea-99e8-0242ac110002,ResourceVersion:18829294,Generation:1,CreationTimestamp:2020-07-01 08:26:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 887bc08f-bb74-11ea-99e8-0242ac110002 0xc0023741d7 0xc0023741d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 1 08:26:23.477: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 1 08:26:23.477: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-qp66d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qp66d/replicasets/test-rolling-update-controller,UID:856b9303-bb74-11ea-99e8-0242ac110002,ResourceVersion:18829303,Generation:2,CreationTimestamp:2020-07-01 08:26:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 887bc08f-bb74-11ea-99e8-0242ac110002 0xc0023740bf 0xc0023740d0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 1 08:26:23.485: INFO: Pod "test-rolling-update-deployment-75db98fb4c-kc222" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-kc222,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-qp66d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qp66d/pods/test-rolling-update-deployment-75db98fb4c-kc222,UID:88c24d59-bb74-11ea-99e8-0242ac110002,ResourceVersion:18829293,Generation:0,CreationTimestamp:2020-07-01 08:26:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 8883601f-bb74-11ea-99e8-0242ac110002 0xc002388367 0xc002388368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lb9xh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb9xh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-lb9xh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002388770} {node.kubernetes.io/unreachable Exists NoExecute 0xc002388790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:26:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:26:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:26:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:26:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.73,StartTime:2020-07-01 08:26:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-01 08:26:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://5414698e9ac4c1d3433a29363154558235af38f00e9ca84617bfb624fef63f92}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:26:23.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-qp66d" 
for this suite. Jul 1 08:26:29.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:26:29.696: INFO: namespace: e2e-tests-deployment-qp66d, resource: bindings, ignored listing per whitelist Jul 1 08:26:29.737: INFO: namespace e2e-tests-deployment-qp66d deletion completed in 6.248841306s • [SLOW TEST:18.247 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:26:29.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 1 08:26:30.167: INFO: Waiting up to 5m0s for pod "pod-90662aaa-bb74-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-46q45" to be "success or failure" Jul 1 08:26:30.343: INFO: Pod "pod-90662aaa-bb74-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 175.303432ms Jul 1 08:26:32.346: INFO: Pod "pod-90662aaa-bb74-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.178975707s Jul 1 08:26:34.364: INFO: Pod "pod-90662aaa-bb74-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.196936101s STEP: Saw pod success Jul 1 08:26:34.364: INFO: Pod "pod-90662aaa-bb74-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:26:34.367: INFO: Trying to get logs from node hunter-worker pod pod-90662aaa-bb74-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 08:26:34.435: INFO: Waiting for pod pod-90662aaa-bb74-11ea-a133-0242ac110018 to disappear Jul 1 08:26:34.468: INFO: Pod pod-90662aaa-bb74-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:26:34.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-46q45" for this suite. Jul 1 08:26:40.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:26:40.546: INFO: namespace: e2e-tests-emptydir-46q45, resource: bindings, ignored listing per whitelist Jul 1 08:26:40.583: INFO: namespace e2e-tests-emptydir-46q45 deletion completed in 6.112316598s • [SLOW TEST:10.846 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:26:40.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-zkpw STEP: Creating a pod to test atomic-volume-subpath Jul 1 08:26:40.751: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zkpw" in namespace "e2e-tests-subpath-mhtrg" to be "success or failure" Jul 1 08:26:40.769: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Pending", Reason="", readiness=false. Elapsed: 18.293859ms Jul 1 08:26:42.774: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022899123s Jul 1 08:26:44.777: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026308314s Jul 1 08:26:47.278: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526769888s Jul 1 08:26:49.282: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Running", Reason="", readiness=false. Elapsed: 8.530740127s Jul 1 08:26:51.286: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Running", Reason="", readiness=false. Elapsed: 10.535107315s Jul 1 08:26:53.290: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Running", Reason="", readiness=false. Elapsed: 12.538968517s Jul 1 08:26:55.294: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.543234984s Jul 1 08:26:57.298: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Running", Reason="", readiness=false. Elapsed: 16.546612526s Jul 1 08:26:59.302: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Running", Reason="", readiness=false. Elapsed: 18.551144625s Jul 1 08:27:01.307: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Running", Reason="", readiness=false. Elapsed: 20.555806618s Jul 1 08:27:03.311: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Running", Reason="", readiness=false. Elapsed: 22.560240423s Jul 1 08:27:05.316: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Running", Reason="", readiness=false. Elapsed: 24.564848898s Jul 1 08:27:07.320: INFO: Pod "pod-subpath-test-configmap-zkpw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.568897858s STEP: Saw pod success Jul 1 08:27:07.320: INFO: Pod "pod-subpath-test-configmap-zkpw" satisfied condition "success or failure" Jul 1 08:27:07.324: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-zkpw container test-container-subpath-configmap-zkpw: STEP: delete the pod Jul 1 08:27:07.366: INFO: Waiting for pod pod-subpath-test-configmap-zkpw to disappear Jul 1 08:27:07.430: INFO: Pod pod-subpath-test-configmap-zkpw no longer exists STEP: Deleting pod pod-subpath-test-configmap-zkpw Jul 1 08:27:07.430: INFO: Deleting pod "pod-subpath-test-configmap-zkpw" in namespace "e2e-tests-subpath-mhtrg" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:27:07.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-mhtrg" for this suite. 
Jul 1 08:27:13.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:27:13.567: INFO: namespace: e2e-tests-subpath-mhtrg, resource: bindings, ignored listing per whitelist Jul 1 08:27:13.632: INFO: namespace e2e-tests-subpath-mhtrg deletion completed in 6.123466796s • [SLOW TEST:33.049 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:27:13.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 1 08:27:13.908: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2kmh6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2kmh6/configmaps/e2e-watch-test-watch-closed,UID:aa723b15-bb74-11ea-99e8-0242ac110002,ResourceVersion:18829504,Generation:0,CreationTimestamp:2020-07-01 08:27:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 08:27:13.908: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2kmh6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2kmh6/configmaps/e2e-watch-test-watch-closed,UID:aa723b15-bb74-11ea-99e8-0242ac110002,ResourceVersion:18829505,Generation:0,CreationTimestamp:2020-07-01 08:27:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 1 08:27:13.943: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2kmh6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2kmh6/configmaps/e2e-watch-test-watch-closed,UID:aa723b15-bb74-11ea-99e8-0242ac110002,ResourceVersion:18829506,Generation:0,CreationTimestamp:2020-07-01 08:27:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 08:27:13.943: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2kmh6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2kmh6/configmaps/e2e-watch-test-watch-closed,UID:aa723b15-bb74-11ea-99e8-0242ac110002,ResourceVersion:18829507,Generation:0,CreationTimestamp:2020-07-01 08:27:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:27:13.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-2kmh6" for this suite. 
Jul 1 08:27:19.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:27:20.040: INFO: namespace: e2e-tests-watch-2kmh6, resource: bindings, ignored listing per whitelist Jul 1 08:27:20.044: INFO: namespace e2e-tests-watch-2kmh6 deletion completed in 6.0970353s • [SLOW TEST:6.412 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:27:20.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-df87z STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 08:27:20.209: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 1 08:27:44.417: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.231:8080/dial?request=hostName&protocol=udp&host=10.244.2.75&port=8081&tries=1'] 
Namespace:e2e-tests-pod-network-test-df87z PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 08:27:44.417: INFO: >>> kubeConfig: /root/.kube/config I0701 08:27:44.449543 6 log.go:172] (0xc0000ebef0) (0xc0024bc960) Create stream I0701 08:27:44.449573 6 log.go:172] (0xc0000ebef0) (0xc0024bc960) Stream added, broadcasting: 1 I0701 08:27:44.452032 6 log.go:172] (0xc0000ebef0) Reply frame received for 1 I0701 08:27:44.452076 6 log.go:172] (0xc0000ebef0) (0xc002199860) Create stream I0701 08:27:44.452090 6 log.go:172] (0xc0000ebef0) (0xc002199860) Stream added, broadcasting: 3 I0701 08:27:44.453075 6 log.go:172] (0xc0000ebef0) Reply frame received for 3 I0701 08:27:44.453315 6 log.go:172] (0xc0000ebef0) (0xc0024bca00) Create stream I0701 08:27:44.453347 6 log.go:172] (0xc0000ebef0) (0xc0024bca00) Stream added, broadcasting: 5 I0701 08:27:44.454121 6 log.go:172] (0xc0000ebef0) Reply frame received for 5 I0701 08:27:44.638786 6 log.go:172] (0xc0000ebef0) Data frame received for 3 I0701 08:27:44.638826 6 log.go:172] (0xc002199860) (3) Data frame handling I0701 08:27:44.638853 6 log.go:172] (0xc002199860) (3) Data frame sent I0701 08:27:44.639692 6 log.go:172] (0xc0000ebef0) Data frame received for 3 I0701 08:27:44.639713 6 log.go:172] (0xc002199860) (3) Data frame handling I0701 08:27:44.639731 6 log.go:172] (0xc0000ebef0) Data frame received for 5 I0701 08:27:44.639763 6 log.go:172] (0xc0024bca00) (5) Data frame handling I0701 08:27:44.642713 6 log.go:172] (0xc0000ebef0) Data frame received for 1 I0701 08:27:44.642757 6 log.go:172] (0xc0024bc960) (1) Data frame handling I0701 08:27:44.642796 6 log.go:172] (0xc0024bc960) (1) Data frame sent I0701 08:27:44.642822 6 log.go:172] (0xc0000ebef0) (0xc0024bc960) Stream removed, broadcasting: 1 I0701 08:27:44.642951 6 log.go:172] (0xc0000ebef0) (0xc0024bc960) Stream removed, broadcasting: 1 I0701 08:27:44.642975 6 log.go:172] (0xc0000ebef0) 
(0xc002199860) Stream removed, broadcasting: 3 I0701 08:27:44.642989 6 log.go:172] (0xc0000ebef0) (0xc0024bca00) Stream removed, broadcasting: 5 Jul 1 08:27:44.643: INFO: Waiting for endpoints: map[] I0701 08:27:44.643356 6 log.go:172] (0xc0000ebef0) Go away received Jul 1 08:27:44.786: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.231:8080/dial?request=hostName&protocol=udp&host=10.244.1.230&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-df87z PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 08:27:44.786: INFO: >>> kubeConfig: /root/.kube/config I0701 08:27:44.814134 6 log.go:172] (0xc000a0a630) (0xc0024bcd20) Create stream I0701 08:27:44.814166 6 log.go:172] (0xc000a0a630) (0xc0024bcd20) Stream added, broadcasting: 1 I0701 08:27:44.816726 6 log.go:172] (0xc000a0a630) Reply frame received for 1 I0701 08:27:44.816768 6 log.go:172] (0xc000a0a630) (0xc0024bcdc0) Create stream I0701 08:27:44.816784 6 log.go:172] (0xc000a0a630) (0xc0024bcdc0) Stream added, broadcasting: 3 I0701 08:27:44.818044 6 log.go:172] (0xc000a0a630) Reply frame received for 3 I0701 08:27:44.818072 6 log.go:172] (0xc000a0a630) (0xc0024bcf00) Create stream I0701 08:27:44.818082 6 log.go:172] (0xc000a0a630) (0xc0024bcf00) Stream added, broadcasting: 5 I0701 08:27:44.818853 6 log.go:172] (0xc000a0a630) Reply frame received for 5 I0701 08:27:44.922527 6 log.go:172] (0xc000a0a630) Data frame received for 3 I0701 08:27:44.922566 6 log.go:172] (0xc0024bcdc0) (3) Data frame handling I0701 08:27:44.922613 6 log.go:172] (0xc0024bcdc0) (3) Data frame sent I0701 08:27:44.923105 6 log.go:172] (0xc000a0a630) Data frame received for 5 I0701 08:27:44.923145 6 log.go:172] (0xc0024bcf00) (5) Data frame handling I0701 08:27:44.923187 6 log.go:172] (0xc000a0a630) Data frame received for 3 I0701 08:27:44.923219 6 log.go:172] (0xc0024bcdc0) (3) Data frame handling I0701 08:27:44.924665 6 log.go:172] 
(0xc000a0a630) Data frame received for 1 I0701 08:27:44.924700 6 log.go:172] (0xc0024bcd20) (1) Data frame handling I0701 08:27:44.924718 6 log.go:172] (0xc0024bcd20) (1) Data frame sent I0701 08:27:44.924737 6 log.go:172] (0xc000a0a630) (0xc0024bcd20) Stream removed, broadcasting: 1 I0701 08:27:44.924853 6 log.go:172] (0xc000a0a630) (0xc0024bcd20) Stream removed, broadcasting: 1 I0701 08:27:44.924870 6 log.go:172] (0xc000a0a630) (0xc0024bcdc0) Stream removed, broadcasting: 3 I0701 08:27:44.924896 6 log.go:172] (0xc000a0a630) Go away received I0701 08:27:44.925040 6 log.go:172] (0xc000a0a630) (0xc0024bcf00) Stream removed, broadcasting: 5 Jul 1 08:27:44.925: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:27:44.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-df87z" for this suite. Jul 1 08:28:10.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:28:11.009: INFO: namespace: e2e-tests-pod-network-test-df87z, resource: bindings, ignored listing per whitelist Jul 1 08:28:11.042: INFO: namespace e2e-tests-pod-network-test-df87z deletion completed in 26.113275285s • [SLOW TEST:50.997 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:28:11.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 08:28:11.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc9f3223-bb74-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-s9tdg" to be "success or failure" Jul 1 08:28:11.324: INFO: Pod "downwardapi-volume-cc9f3223-bb74-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 111.237216ms Jul 1 08:28:13.328: INFO: Pod "downwardapi-volume-cc9f3223-bb74-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114570051s Jul 1 08:28:15.332: INFO: Pod "downwardapi-volume-cc9f3223-bb74-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.118553083s STEP: Saw pod success Jul 1 08:28:15.332: INFO: Pod "downwardapi-volume-cc9f3223-bb74-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:28:15.334: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cc9f3223-bb74-11ea-a133-0242ac110018 container client-container: STEP: delete the pod Jul 1 08:28:15.355: INFO: Waiting for pod downwardapi-volume-cc9f3223-bb74-11ea-a133-0242ac110018 to disappear Jul 1 08:28:15.359: INFO: Pod downwardapi-volume-cc9f3223-bb74-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:28:15.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-s9tdg" for this suite. Jul 1 08:28:21.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:28:21.440: INFO: namespace: e2e-tests-projected-s9tdg, resource: bindings, ignored listing per whitelist Jul 1 08:28:21.479: INFO: namespace e2e-tests-projected-s9tdg deletion completed in 6.115315347s • [SLOW TEST:10.437 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client Jul 1 08:28:21.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-d2cf31bc-bb74-11ea-a133-0242ac110018 STEP: Creating a pod to test consume secrets Jul 1 08:28:21.598: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d2d1f135-bb74-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-k2plj" to be "success or failure" Jul 1 08:28:21.601: INFO: Pod "pod-projected-secrets-d2d1f135-bb74-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983613ms Jul 1 08:28:23.606: INFO: Pod "pod-projected-secrets-d2d1f135-bb74-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007443047s Jul 1 08:28:26.067: INFO: Pod "pod-projected-secrets-d2d1f135-bb74-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.469247303s Jul 1 08:28:28.072: INFO: Pod "pod-projected-secrets-d2d1f135-bb74-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.473689948s STEP: Saw pod success Jul 1 08:28:28.072: INFO: Pod "pod-projected-secrets-d2d1f135-bb74-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:28:28.075: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-d2d1f135-bb74-11ea-a133-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jul 1 08:28:28.224: INFO: Waiting for pod pod-projected-secrets-d2d1f135-bb74-11ea-a133-0242ac110018 to disappear Jul 1 08:28:28.243: INFO: Pod pod-projected-secrets-d2d1f135-bb74-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:28:28.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k2plj" for this suite. Jul 1 08:28:34.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:28:34.336: INFO: namespace: e2e-tests-projected-k2plj, resource: bindings, ignored listing per whitelist Jul 1 08:28:34.352: INFO: namespace e2e-tests-projected-k2plj deletion completed in 6.105321888s • [SLOW TEST:12.873 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:28:34.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-njxd7
Jul 1 08:28:40.515: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-njxd7
STEP: checking the pod's current state and verifying that restartCount is present
Jul 1 08:28:40.518: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:32:40.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-njxd7" for this suite.
Jul 1 08:32:46.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:32:46.972: INFO: namespace: e2e-tests-container-probe-njxd7, resource: bindings, ignored listing per whitelist
Jul 1 08:32:47.003: INFO: namespace e2e-tests-container-probe-njxd7 deletion completed in 6.10363447s
• [SLOW TEST:252.651 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:32:47.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 1 08:32:47.152: INFO: Waiting up to 5m0s for pod "downward-api-711540c5-bb75-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-lg4wc" to be "success or failure"
Jul 1 08:32:47.186: INFO: Pod "downward-api-711540c5-bb75-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.417329ms
Jul 1 08:32:49.190: INFO: Pod "downward-api-711540c5-bb75-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038121582s
Jul 1 08:32:51.194: INFO: Pod "downward-api-711540c5-bb75-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042444854s
STEP: Saw pod success
Jul 1 08:32:51.194: INFO: Pod "downward-api-711540c5-bb75-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:32:51.197: INFO: Trying to get logs from node hunter-worker pod downward-api-711540c5-bb75-11ea-a133-0242ac110018 container dapi-container:
STEP: delete the pod
Jul 1 08:32:51.224: INFO: Waiting for pod downward-api-711540c5-bb75-11ea-a133-0242ac110018 to disappear
Jul 1 08:32:51.232: INFO: Pod downward-api-711540c5-bb75-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:32:51.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lg4wc" for this suite.
Jul 1 08:32:57.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:32:57.425: INFO: namespace: e2e-tests-downward-api-lg4wc, resource: bindings, ignored listing per whitelist
Jul 1 08:32:57.437: INFO: namespace e2e-tests-downward-api-lg4wc deletion completed in 6.201940429s
• [SLOW TEST:10.433 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:32:57.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 1 08:32:57.531: INFO: Waiting up to 5m0s for pod "pod-774a612b-bb75-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-4v7d6" to be "success or failure"
Jul 1 08:32:57.568: INFO: Pod "pod-774a612b-bb75-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.234026ms
Jul 1 08:32:59.572: INFO: Pod "pod-774a612b-bb75-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041594425s
Jul 1 08:33:01.576: INFO: Pod "pod-774a612b-bb75-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045204277s
STEP: Saw pod success
Jul 1 08:33:01.576: INFO: Pod "pod-774a612b-bb75-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:33:01.578: INFO: Trying to get logs from node hunter-worker pod pod-774a612b-bb75-11ea-a133-0242ac110018 container test-container:
STEP: delete the pod
Jul 1 08:33:01.630: INFO: Waiting for pod pod-774a612b-bb75-11ea-a133-0242ac110018 to disappear
Jul 1 08:33:01.636: INFO: Pod pod-774a612b-bb75-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:33:01.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4v7d6" for this suite.
Jul 1 08:33:07.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:33:07.731: INFO: namespace: e2e-tests-emptydir-4v7d6, resource: bindings, ignored listing per whitelist
Jul 1 08:33:07.733: INFO: namespace e2e-tests-emptydir-4v7d6 deletion completed in 6.093620237s
• [SLOW TEST:10.296 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:33:07.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 1 08:33:12.710: INFO: Successfully updated pod "annotationupdate7d6c5088-bb75-11ea-a133-0242ac110018"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:33:14.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sv5np" for this suite.
Jul 1 08:33:36.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:33:36.837: INFO: namespace: e2e-tests-projected-sv5np, resource: bindings, ignored listing per whitelist
Jul 1 08:33:36.864: INFO: namespace e2e-tests-projected-sv5np deletion completed in 22.0982282s
• [SLOW TEST:29.130 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:33:36.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jul 1 08:33:41.166: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:34:05.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-x5dq6" for this suite.
Jul 1 08:34:11.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:34:11.322: INFO: namespace: e2e-tests-namespaces-x5dq6, resource: bindings, ignored listing per whitelist
Jul 1 08:34:11.366: INFO: namespace e2e-tests-namespaces-x5dq6 deletion completed in 6.09831796s
STEP: Destroying namespace "e2e-tests-nsdeletetest-vg4lw" for this suite.
Jul 1 08:34:11.368: INFO: Namespace e2e-tests-nsdeletetest-vg4lw was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-nz8lw" for this suite.
Jul 1 08:34:17.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:34:17.424: INFO: namespace: e2e-tests-nsdeletetest-nz8lw, resource: bindings, ignored listing per whitelist
Jul 1 08:34:17.586: INFO: namespace e2e-tests-nsdeletetest-nz8lw deletion completed in 6.217913429s
• [SLOW TEST:40.722 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:34:17.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 1 08:34:41.796: INFO: Container started at 2020-07-01 08:34:21 +0000 UTC, pod became ready at 2020-07-01 08:34:39 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:34:41.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4tqnb" for this suite.
Jul 1 08:35:03.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:35:03.876: INFO: namespace: e2e-tests-container-probe-4tqnb, resource: bindings, ignored listing per whitelist
Jul 1 08:35:03.903: INFO: namespace e2e-tests-container-probe-4tqnb deletion completed in 22.102615828s
• [SLOW TEST:46.317 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:35:03.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 1 08:35:14.451: INFO: Waiting up to 5m0s for pod "client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018" in namespace "e2e-tests-pods-wxj6z" to be "success or failure"
Jul 1 08:35:14.870: INFO: Pod "client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 419.078687ms
Jul 1 08:35:16.873: INFO: Pod "client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422242388s
Jul 1 08:35:18.942: INFO: Pod "client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490559842s
Jul 1 08:35:20.946: INFO: Pod "client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494862639s
Jul 1 08:35:22.950: INFO: Pod "client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.498326953s
Jul 1 08:35:24.953: INFO: Pod "client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 10.502175238s
Jul 1 08:35:26.956: INFO: Pod "client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.505261171s
STEP: Saw pod success
Jul 1 08:35:26.957: INFO: Pod "client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:35:26.959: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018 container env3cont:
STEP: delete the pod
Jul 1 08:35:27.399: INFO: Waiting for pod client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018 to disappear
Jul 1 08:35:27.413: INFO: Pod client-envvars-c8e575f2-bb75-11ea-a133-0242ac110018 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:35:27.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wxj6z" for this suite.
Jul 1 08:36:13.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:36:13.486: INFO: namespace: e2e-tests-pods-wxj6z, resource: bindings, ignored listing per whitelist
Jul 1 08:36:13.506: INFO: namespace e2e-tests-pods-wxj6z deletion completed in 46.090208558s
• [SLOW TEST:69.603 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:36:13.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 1 08:36:13.625: INFO: namespace e2e-tests-kubectl-8vrjg
Jul 1 08:36:13.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8vrjg'
Jul 1 08:36:17.857: INFO: stderr: ""
Jul 1 08:36:17.857: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 1 08:36:18.862: INFO: Selector matched 1 pods for map[app:redis]
Jul 1 08:36:18.862: INFO: Found 0 / 1
Jul 1 08:36:19.937: INFO: Selector matched 1 pods for map[app:redis]
Jul 1 08:36:19.937: INFO: Found 0 / 1
Jul 1 08:36:20.862: INFO: Selector matched 1 pods for map[app:redis]
Jul 1 08:36:20.862: INFO: Found 0 / 1
Jul 1 08:36:21.862: INFO: Selector matched 1 pods for map[app:redis]
Jul 1 08:36:21.862: INFO: Found 1 / 1
Jul 1 08:36:21.862: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jul 1 08:36:21.866: INFO: Selector matched 1 pods for map[app:redis]
Jul 1 08:36:21.866: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jul 1 08:36:21.866: INFO: wait on redis-master startup in e2e-tests-kubectl-8vrjg
Jul 1 08:36:21.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d2bjd redis-master --namespace=e2e-tests-kubectl-8vrjg'
Jul 1 08:36:21.996: INFO: stderr: ""
Jul 1 08:36:21.996: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jul 08:36:20.973 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jul 08:36:20.973 # Server started, Redis version 3.2.12\n1:M 01 Jul 08:36:20.973 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jul 08:36:20.973 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jul 1 08:36:21.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-8vrjg'
Jul 1 08:36:22.171: INFO: stderr: ""
Jul 1 08:36:22.171: INFO: stdout: "service/rm2 exposed\n"
Jul 1 08:36:22.182: INFO: Service rm2 in namespace e2e-tests-kubectl-8vrjg found.
STEP: exposing service
Jul 1 08:36:24.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-8vrjg'
Jul 1 08:36:24.365: INFO: stderr: ""
Jul 1 08:36:24.365: INFO: stdout: "service/rm3 exposed\n"
Jul 1 08:36:24.372: INFO: Service rm3 in namespace e2e-tests-kubectl-8vrjg found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:36:26.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8vrjg" for this suite.
Jul 1 08:36:50.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:36:50.474: INFO: namespace: e2e-tests-kubectl-8vrjg, resource: bindings, ignored listing per whitelist
Jul 1 08:36:50.476: INFO: namespace e2e-tests-kubectl-8vrjg deletion completed in 24.09031953s
• [SLOW TEST:36.970 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:36:50.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 1 08:36:50.566: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 1 08:36:50.574: INFO: Waiting for terminating namespaces to be deleted...
Jul 1 08:36:50.577: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Jul 1 08:36:50.582: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
Jul 1 08:36:50.582: INFO: Container kube-proxy ready: true, restart count 0
Jul 1 08:36:50.582: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Jul 1 08:36:50.582: INFO: Container kindnet-cni ready: true, restart count 0
Jul 1 08:36:50.582: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Jul 1 08:36:50.582: INFO: Container coredns ready: true, restart count 0
Jul 1 08:36:50.582: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Jul 1 08:36:50.587: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Jul 1 08:36:50.587: INFO: Container kindnet-cni ready: true, restart count 0
Jul 1 08:36:50.587: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Jul 1 08:36:50.587: INFO: Container coredns ready: true, restart count 0
Jul 1 08:36:50.587: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Jul 1 08:36:50.587: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-049fb165-bb76-11ea-a133-0242ac110018 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-049fb165-bb76-11ea-a133-0242ac110018 off the node hunter-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-049fb165-bb76-11ea-a133-0242ac110018
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:36:58.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-tpf2m" for this suite.
Jul 1 08:37:08.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:37:08.827: INFO: namespace: e2e-tests-sched-pred-tpf2m, resource: bindings, ignored listing per whitelist
Jul 1 08:37:08.842: INFO: namespace e2e-tests-sched-pred-tpf2m deletion completed in 10.077931964s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:18.366 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:37:08.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jul 1 08:37:09.391: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:37:09.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-44lwr" for this suite.
Jul 1 08:37:15.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:37:15.772: INFO: namespace: e2e-tests-kubectl-44lwr, resource: bindings, ignored listing per whitelist
Jul 1 08:37:15.873: INFO: namespace e2e-tests-kubectl-44lwr deletion completed in 6.402210906s
• [SLOW TEST:7.030 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:37:15.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 1 08:37:24.030: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 1 08:37:24.130: INFO: Pod pod-with-prestop-http-hook still exists
Jul 1 08:37:26.130: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 1 08:37:26.135: INFO: Pod pod-with-prestop-http-hook still exists
Jul 1 08:37:28.130: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 1 08:37:28.134: INFO: Pod pod-with-prestop-http-hook still exists
Jul 1 08:37:30.130: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 1 08:37:30.134: INFO: Pod pod-with-prestop-http-hook still exists
Jul 1 08:37:32.130: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 1 08:37:32.134: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:37:32.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dj7dp" for this suite.
Jul 1 08:37:54.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:37:54.203: INFO: namespace: e2e-tests-container-lifecycle-hook-dj7dp, resource: bindings, ignored listing per whitelist Jul 1 08:37:54.233: INFO: namespace e2e-tests-container-lifecycle-hook-dj7dp deletion completed in 22.089878084s • [SLOW TEST:38.360 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:37:54.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-2837c199-bb76-11ea-a133-0242ac110018 STEP: Creating a pod to test consume secrets Jul 1 08:37:54.389: INFO: Waiting up to 5m0s for pod "pod-secrets-2839cfc7-bb76-11ea-a133-0242ac110018" in namespace "e2e-tests-secrets-7t8gd" to be "success or failure" Jul 1 08:37:54.393: INFO: Pod 
"pod-secrets-2839cfc7-bb76-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229923ms Jul 1 08:37:56.477: INFO: Pod "pod-secrets-2839cfc7-bb76-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088399794s Jul 1 08:37:58.729: INFO: Pod "pod-secrets-2839cfc7-bb76-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340388271s Jul 1 08:38:00.734: INFO: Pod "pod-secrets-2839cfc7-bb76-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.345204476s STEP: Saw pod success Jul 1 08:38:00.734: INFO: Pod "pod-secrets-2839cfc7-bb76-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:38:00.738: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-2839cfc7-bb76-11ea-a133-0242ac110018 container secret-volume-test: STEP: delete the pod Jul 1 08:38:00.833: INFO: Waiting for pod pod-secrets-2839cfc7-bb76-11ea-a133-0242ac110018 to disappear Jul 1 08:38:00.856: INFO: Pod pod-secrets-2839cfc7-bb76-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:38:00.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7t8gd" for this suite. 
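The secrets test above shows the other wait pattern these specs share: sample the pod phase, report elapsed time, and retry every ~2 seconds until a terminal phase. A minimal sketch of that loop; `get_phase` and `sleep` are injected parameters assumed for testability, not the e2e framework's actual API:

```python
import time

def wait_for_pod_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                    sleep=time.sleep):
    """Poll get_phase() until it reports a terminal pod phase.

    Mirrors the 'Waiting up to 5m0s for pod ... to be "success or failure"'
    lines in the log: sample the phase, print the elapsed time, retry
    roughly every two seconds.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'pod still "{phase}" after {elapsed:.1f}s')
        sleep(interval)

# Example: two Pending samples, then Succeeded, as in the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_success_or_failure(lambda: next(phases),
                                         sleep=lambda s: None)
```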
Jul 1 08:38:06.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:38:06.914: INFO: namespace: e2e-tests-secrets-7t8gd, resource: bindings, ignored listing per whitelist Jul 1 08:38:06.981: INFO: namespace e2e-tests-secrets-7t8gd deletion completed in 6.122604956s • [SLOW TEST:12.748 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:38:06.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:38:11.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-kdclm" for this suite. 
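The Kubelet spec that just finished schedules a busybox container whose command writes to stdout, then verifies the output shows up in the container logs. A rough sketch of such a pod as a Python dict; the field names follow the Kubernetes PodSpec API, but the pod name, image tag, and exact command are illustrative assumptions, not taken from this log:

```python
# Sketch of a pod like the one the kubelet log test schedules: a busybox
# container that echoes to stdout, which the kubelet surfaces via
# `kubectl logs`. The command string here is an assumption.
busybox_logging_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-scheduling-test"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "busybox",
                "image": "busybox",
                "command": ["sh", "-c", 'echo "Hello World"'],
            }
        ],
    },
}
```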
Jul 1 08:38:53.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:38:53.316: INFO: namespace: e2e-tests-kubelet-test-kdclm, resource: bindings, ignored listing per whitelist Jul 1 08:38:53.389: INFO: namespace e2e-tests-kubelet-test-kdclm deletion completed in 42.155696924s • [SLOW TEST:46.408 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:38:53.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-wwrx STEP: Creating a pod to test atomic-volume-subpath Jul 1 08:38:53.513: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wwrx" in namespace "e2e-tests-subpath-57lzj" to be "success or failure" Jul 1 08:38:53.556: INFO: Pod "pod-subpath-test-projected-wwrx": 
Phase="Pending", Reason="", readiness=false. Elapsed: 42.724449ms Jul 1 08:38:55.561: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047813396s Jul 1 08:38:57.564: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050777119s Jul 1 08:38:59.568: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 6.054810899s Jul 1 08:39:01.575: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 8.061633548s Jul 1 08:39:03.580: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 10.067324603s Jul 1 08:39:05.584: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 12.070998717s Jul 1 08:39:07.590: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 14.076999679s Jul 1 08:39:09.600: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 16.087239128s Jul 1 08:39:11.605: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 18.092073888s Jul 1 08:39:13.628: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 20.114889029s Jul 1 08:39:15.632: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 22.119163139s Jul 1 08:39:19.024: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Running", Reason="", readiness=false. Elapsed: 25.511263089s Jul 1 08:39:21.028: INFO: Pod "pod-subpath-test-projected-wwrx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 27.515066797s STEP: Saw pod success Jul 1 08:39:21.028: INFO: Pod "pod-subpath-test-projected-wwrx" satisfied condition "success or failure" Jul 1 08:39:21.030: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-wwrx container test-container-subpath-projected-wwrx: STEP: delete the pod Jul 1 08:39:21.094: INFO: Waiting for pod pod-subpath-test-projected-wwrx to disappear Jul 1 08:39:21.115: INFO: Pod pod-subpath-test-projected-wwrx no longer exists STEP: Deleting pod pod-subpath-test-projected-wwrx Jul 1 08:39:21.115: INFO: Deleting pod "pod-subpath-test-projected-wwrx" in namespace "e2e-tests-subpath-57lzj" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:39:21.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-57lzj" for this suite. Jul 1 08:39:27.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:39:27.263: INFO: namespace: e2e-tests-subpath-57lzj, resource: bindings, ignored listing per whitelist Jul 1 08:39:27.285: INFO: namespace e2e-tests-subpath-57lzj deletion completed in 6.165258212s • [SLOW TEST:33.896 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:39:27.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0701 08:39:57.959764 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 08:39:57.959: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For 
namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:39:57.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-4bf8q" for this suite. Jul 1 08:40:04.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:40:04.046: INFO: namespace: e2e-tests-gc-4bf8q, resource: bindings, ignored listing per whitelist Jul 1 08:40:04.081: INFO: namespace e2e-tests-gc-4bf8q deletion completed in 6.117653792s • [SLOW TEST:36.795 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:40:04.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-75b0c8a2-bb76-11ea-a133-0242ac110018 
STEP: Creating a pod to test consume secrets Jul 1 08:40:04.343: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-75b21460-bb76-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-p62l2" to be "success or failure" Jul 1 08:40:04.348: INFO: Pod "pod-projected-secrets-75b21460-bb76-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.835452ms Jul 1 08:40:06.353: INFO: Pod "pod-projected-secrets-75b21460-bb76-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009138656s Jul 1 08:40:08.361: INFO: Pod "pod-projected-secrets-75b21460-bb76-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017928869s STEP: Saw pod success Jul 1 08:40:08.361: INFO: Pod "pod-projected-secrets-75b21460-bb76-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:40:08.364: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-75b21460-bb76-11ea-a133-0242ac110018 container secret-volume-test: STEP: delete the pod Jul 1 08:40:08.401: INFO: Waiting for pod pod-projected-secrets-75b21460-bb76-11ea-a133-0242ac110018 to disappear Jul 1 08:40:08.414: INFO: Pod pod-projected-secrets-75b21460-bb76-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:40:08.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p62l2" for this suite. 
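The projected-secret spec above consumes one secret through two separate volumes of the same pod. A rough sketch of that shape as a Python dict; field names follow the Kubernetes PodSpec API (`projected.sources[].secret`), but the secret name, image, and mount paths are illustrative assumptions:

```python
secret_name = "projected-secret-test"  # the log uses a generated name

# Two volumes, each projecting the same secret, mounted at distinct paths
# inside a single test container.
pod_with_two_projected_volumes = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-secrets"},
    "spec": {
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",  # assumed placeholder image
            "volumeMounts": [
                {"name": "projected-secret-volume-1",
                 "mountPath": "/etc/projected-secret-volume-1",
                 "readOnly": True},
                {"name": "projected-secret-volume-2",
                 "mountPath": "/etc/projected-secret-volume-2",
                 "readOnly": True},
            ],
        }],
        "volumes": [
            {"name": f"projected-secret-volume-{i}",
             "projected": {"sources": [{"secret": {"name": secret_name}}]}}
            for i in (1, 2)
        ],
    },
}
```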
Jul 1 08:40:14.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:40:14.518: INFO: namespace: e2e-tests-projected-p62l2, resource: bindings, ignored listing per whitelist Jul 1 08:40:14.547: INFO: namespace e2e-tests-projected-p62l2 deletion completed in 6.09189252s • [SLOW TEST:10.466 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:40:14.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jul 1 08:40:18.767: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-7bd7afb1-bb76-11ea-a133-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-h44nj", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-h44nj/pods/pod-submit-remove-7bd7afb1-bb76-11ea-a133-0242ac110018", UID:"7bdcfee6-bb76-11ea-99e8-0242ac110002", ResourceVersion:"18831670", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729189614, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"648933961"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2gstx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001f6cd00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2gstx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00162e8f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c0e540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00162e940)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00162e960)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00162e968), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(0xc00162e96c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729189614, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729189618, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729189618, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729189614, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.86", StartTime:(*v1.Time)(0xc001221960), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001221980), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://c1ace50398ff25c84a14381fe60f2a7f4ba19ad9a19eafc76d6e3a8aa94426bb"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:40:31.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-h44nj" for this suite. Jul 1 08:40:37.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:40:37.967: INFO: namespace: e2e-tests-pods-h44nj, resource: bindings, ignored listing per whitelist Jul 1 08:40:38.022: INFO: namespace e2e-tests-pods-h44nj deletion completed in 6.12743764s • [SLOW TEST:23.475 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:40:38.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search 
dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-mjp2z A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-mjp2z;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-mjp2z A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-mjp2z.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-mjp2z.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-mjp2z.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-mjp2z.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-mjp2z.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-mjp2z.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-mjp2z.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-mjp2z.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 250.71.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.71.250_udp@PTR;check="$$(dig +tcp +noall +answer +search 250.71.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.71.250_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-mjp2z A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-mjp2z;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-mjp2z A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-mjp2z.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-mjp2z.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc SRV)" && test -n "$$check" 
&& echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-mjp2z.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-mjp2z.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-mjp2z.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-mjp2z.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-mjp2z.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 250.71.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.71.250_udp@PTR;check="$$(dig +tcp +noall +answer +search 250.71.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.71.250_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 08:40:46.351: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:46.361: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:46.389: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:46.391: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:46.395: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:46.397: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:46.400: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server 
could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:46.403: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:46.406: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:46.427: INFO: Lookups using e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-mjp2z jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-mjp2z.svc] Jul 1 08:40:51.432: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:51.444: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:51.477: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:51.480: INFO: Unable to read 
jessie_tcp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:51.483: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:51.487: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:51.490: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:51.494: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:51.579: INFO: Lookups using e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-mjp2z jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc] Jul 1 08:40:56.432: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods 
dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:56.441: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:56.477: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:56.480: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:56.483: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:56.486: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:56.489: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:56.492: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:40:56.518: INFO: Lookups using 
e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-mjp2z jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc] Jul 1 08:41:01.432: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:01.443: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:01.478: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:01.481: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:01.484: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:01.487: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:01.490: 
INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:01.493: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:01.519: INFO: Lookups using e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-mjp2z jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc] Jul 1 08:41:06.432: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:06.441: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:06.475: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:06.479: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods 
dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:06.482: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:06.485: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:06.488: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:06.492: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:06.518: INFO: Lookups using e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-mjp2z jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc] Jul 1 08:41:11.431: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:11.441: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod 
e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:11.468: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:11.470: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:11.473: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:11.475: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:11.477: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:11.480: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc from pod e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018: the server could not find the requested resource (get pods dns-test-89e23b41-bb76-11ea-a133-0242ac110018) Jul 1 08:41:11.500: INFO: Lookups using e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service 
jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-mjp2z jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z jessie_udp@dns-test-service.e2e-tests-dns-mjp2z.svc jessie_tcp@dns-test-service.e2e-tests-dns-mjp2z.svc] Jul 1 08:41:16.512: INFO: DNS probes using e2e-tests-dns-mjp2z/dns-test-89e23b41-bb76-11ea-a133-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:41:16.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-mjp2z" for this suite. Jul 1 08:41:23.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:41:23.035: INFO: namespace: e2e-tests-dns-mjp2z, resource: bindings, ignored listing per whitelist Jul 1 08:41:23.092: INFO: namespace e2e-tests-dns-mjp2z deletion completed in 6.168254522s • [SLOW TEST:45.070 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:41:23.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected 
downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 08:41:23.215: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4b1a304-bb76-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-qpjqd" to be "success or failure" Jul 1 08:41:23.271: INFO: Pod "downwardapi-volume-a4b1a304-bb76-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 55.983011ms Jul 1 08:41:25.276: INFO: Pod "downwardapi-volume-a4b1a304-bb76-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060391035s Jul 1 08:41:27.281: INFO: Pod "downwardapi-volume-a4b1a304-bb76-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065389606s STEP: Saw pod success Jul 1 08:41:27.281: INFO: Pod "downwardapi-volume-a4b1a304-bb76-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:41:27.284: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a4b1a304-bb76-11ea-a133-0242ac110018 container client-container: STEP: delete the pod Jul 1 08:41:27.304: INFO: Waiting for pod downwardapi-volume-a4b1a304-bb76-11ea-a133-0242ac110018 to disappear Jul 1 08:41:27.322: INFO: Pod downwardapi-volume-a4b1a304-bb76-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:41:27.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qpjqd" for this suite. 
Jul 1 08:41:33.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:41:33.435: INFO: namespace: e2e-tests-projected-qpjqd, resource: bindings, ignored listing per whitelist Jul 1 08:41:33.445: INFO: namespace e2e-tests-projected-qpjqd deletion completed in 6.117665911s • [SLOW TEST:10.353 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:41:33.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 1 08:41:33.584: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 1 08:41:38.589: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:41:39.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-replication-controller-xldzx" for this suite. Jul 1 08:41:45.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:41:45.849: INFO: namespace: e2e-tests-replication-controller-xldzx, resource: bindings, ignored listing per whitelist Jul 1 08:41:45.866: INFO: namespace e2e-tests-replication-controller-xldzx deletion completed in 6.226264546s • [SLOW TEST:12.420 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:41:45.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 1 08:41:46.136: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:41:54.092: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-lbvtk" for this suite. Jul 1 08:42:16.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:42:16.285: INFO: namespace: e2e-tests-init-container-lbvtk, resource: bindings, ignored listing per whitelist Jul 1 08:42:16.297: INFO: namespace e2e-tests-init-container-lbvtk deletion completed in 22.163046363s • [SLOW TEST:30.431 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:42:16.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 08:42:16.483: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4752b18-bb76-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-qlrjf" to be "success or failure" Jul 1 08:42:16.502: INFO: Pod 
"downwardapi-volume-c4752b18-bb76-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.584047ms Jul 1 08:42:18.506: INFO: Pod "downwardapi-volume-c4752b18-bb76-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023371595s Jul 1 08:42:20.510: INFO: Pod "downwardapi-volume-c4752b18-bb76-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026866914s STEP: Saw pod success Jul 1 08:42:20.510: INFO: Pod "downwardapi-volume-c4752b18-bb76-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:42:20.512: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c4752b18-bb76-11ea-a133-0242ac110018 container client-container: STEP: delete the pod Jul 1 08:42:20.668: INFO: Waiting for pod downwardapi-volume-c4752b18-bb76-11ea-a133-0242ac110018 to disappear Jul 1 08:42:20.681: INFO: Pod downwardapi-volume-c4752b18-bb76-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:42:20.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qlrjf" for this suite. 
Jul 1 08:42:26.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:42:26.794: INFO: namespace: e2e-tests-projected-qlrjf, resource: bindings, ignored listing per whitelist Jul 1 08:42:26.811: INFO: namespace e2e-tests-projected-qlrjf deletion completed in 6.126416034s • [SLOW TEST:10.514 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:42:26.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Jul 1 08:42:26.920: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jul 1 08:42:26.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 
08:42:27.303: INFO: stderr: "" Jul 1 08:42:27.303: INFO: stdout: "service/redis-slave created\n" Jul 1 08:42:27.303: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jul 1 08:42:27.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:28.851: INFO: stderr: "" Jul 1 08:42:28.851: INFO: stdout: "service/redis-master created\n" Jul 1 08:42:28.851: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jul 1 08:42:28.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:29.254: INFO: stderr: "" Jul 1 08:42:29.254: INFO: stdout: "service/frontend created\n" Jul 1 08:42:29.255: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jul 1 08:42:29.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:29.552: INFO: stderr: "" Jul 1 08:42:29.552: INFO: stdout: "deployment.extensions/frontend created\n" Jul 1 08:42:29.552: 
INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 1 08:42:29.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:29.943: INFO: stderr: "" Jul 1 08:42:29.943: INFO: stdout: "deployment.extensions/redis-master created\n" Jul 1 08:42:29.943: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jul 1 08:42:29.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:30.304: INFO: stderr: "" Jul 1 08:42:30.304: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Jul 1 08:42:30.304: INFO: Waiting for all frontend pods to be Running. Jul 1 08:42:45.355: INFO: Waiting for frontend to serve content. Jul 1 08:42:46.088: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Jul 1 08:42:51.545: INFO: Trying to add a new entry to the guestbook. Jul 1 08:42:52.159: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources Jul 1 08:42:52.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:52.880: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 08:42:52.880: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jul 1 08:42:52.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:53.498: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 08:42:53.498: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 1 08:42:53.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:53.662: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 08:42:53.662: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 1 08:42:53.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:53.805: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 1 08:42:53.805: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 1 08:42:53.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:54.290: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 08:42:54.290: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 1 08:42:54.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4h5w6' Jul 1 08:42:54.827: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 08:42:54.827: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:42:54.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4h5w6" for this suite. 
Jul 1 08:43:47.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:43:47.571: INFO: namespace: e2e-tests-kubectl-4h5w6, resource: bindings, ignored listing per whitelist Jul 1 08:43:47.622: INFO: namespace e2e-tests-kubectl-4h5w6 deletion completed in 52.696312518s • [SLOW TEST:80.811 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:43:47.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-8jvnl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8jvnl to expose endpoints map[] Jul 1 08:43:47.895: INFO: Get endpoints failed (56.822357ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jul 1 
08:43:48.904: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8jvnl exposes endpoints map[] (1.065631913s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-8jvnl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8jvnl to expose endpoints map[pod1:[100]] Jul 1 08:43:53.109: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8jvnl exposes endpoints map[pod1:[100]] (4.199267389s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-8jvnl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8jvnl to expose endpoints map[pod1:[100] pod2:[101]] Jul 1 08:43:56.255: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8jvnl exposes endpoints map[pod2:[101] pod1:[100]] (3.143607069s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-8jvnl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8jvnl to expose endpoints map[pod2:[101]] Jul 1 08:43:57.351: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8jvnl exposes endpoints map[pod2:[101]] (1.092242492s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-8jvnl STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8jvnl to expose endpoints map[] Jul 1 08:43:58.379: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8jvnl exposes endpoints map[] (1.023451733s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:43:58.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-8jvnl" for this suite. 
Jul 1 08:44:22.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:44:22.493: INFO: namespace: e2e-tests-services-8jvnl, resource: bindings, ignored listing per whitelist Jul 1 08:44:22.555: INFO: namespace e2e-tests-services-8jvnl deletion completed in 24.08453329s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:34.933 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:44:22.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Jul 1 08:44:23.390: INFO: Waiting up to 5m0s for pod "client-containers-10024656-bb77-11ea-a133-0242ac110018" in namespace "e2e-tests-containers-m4r9h" to be "success or failure" Jul 1 08:44:23.458: INFO: Pod "client-containers-10024656-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 67.749965ms Jul 1 08:44:25.461: INFO: Pod "client-containers-10024656-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071049455s Jul 1 08:44:27.655: INFO: Pod "client-containers-10024656-bb77-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.265278597s STEP: Saw pod success Jul 1 08:44:27.655: INFO: Pod "client-containers-10024656-bb77-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:44:27.709: INFO: Trying to get logs from node hunter-worker pod client-containers-10024656-bb77-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 08:44:27.920: INFO: Waiting for pod client-containers-10024656-bb77-11ea-a133-0242ac110018 to disappear Jul 1 08:44:27.966: INFO: Pod client-containers-10024656-bb77-11ea-a133-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:44:27.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-m4r9h" for this suite. 
Jul 1 08:44:34.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:44:34.138: INFO: namespace: e2e-tests-containers-m4r9h, resource: bindings, ignored listing per whitelist Jul 1 08:44:34.186: INFO: namespace e2e-tests-containers-m4r9h deletion completed in 6.217944114s • [SLOW TEST:11.631 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:44:34.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jul 1 08:44:34.350: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xppwx,SelfLink:/api/v1/namespaces/e2e-tests-watch-xppwx/configmaps/e2e-watch-test-label-changed,UID:169d09ec-bb77-11ea-99e8-0242ac110002,ResourceVersion:18832621,Generation:0,CreationTimestamp:2020-07-01 08:44:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 08:44:34.351: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xppwx,SelfLink:/api/v1/namespaces/e2e-tests-watch-xppwx/configmaps/e2e-watch-test-label-changed,UID:169d09ec-bb77-11ea-99e8-0242ac110002,ResourceVersion:18832622,Generation:0,CreationTimestamp:2020-07-01 08:44:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 1 08:44:34.351: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xppwx,SelfLink:/api/v1/namespaces/e2e-tests-watch-xppwx/configmaps/e2e-watch-test-label-changed,UID:169d09ec-bb77-11ea-99e8-0242ac110002,ResourceVersion:18832623,Generation:0,CreationTimestamp:2020-07-01 08:44:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jul 1 08:44:44.389: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xppwx,SelfLink:/api/v1/namespaces/e2e-tests-watch-xppwx/configmaps/e2e-watch-test-label-changed,UID:169d09ec-bb77-11ea-99e8-0242ac110002,ResourceVersion:18832644,Generation:0,CreationTimestamp:2020-07-01 08:44:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 08:44:44.389: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xppwx,SelfLink:/api/v1/namespaces/e2e-tests-watch-xppwx/configmaps/e2e-watch-test-label-changed,UID:169d09ec-bb77-11ea-99e8-0242ac110002,ResourceVersion:18832645,Generation:0,CreationTimestamp:2020-07-01 08:44:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jul 1 08:44:44.390: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xppwx,SelfLink:/api/v1/namespaces/e2e-tests-watch-xppwx/configmaps/e2e-watch-test-label-changed,UID:169d09ec-bb77-11ea-99e8-0242ac110002,ResourceVersion:18832646,Generation:0,CreationTimestamp:2020-07-01 08:44:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:44:44.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-xppwx" for this suite. Jul 1 08:44:50.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:44:50.528: INFO: namespace: e2e-tests-watch-xppwx, resource: bindings, ignored listing per whitelist Jul 1 08:44:50.551: INFO: namespace e2e-tests-watch-xppwx deletion completed in 6.138841335s • [SLOW TEST:16.364 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:44:50.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-2068e8c0-bb77-11ea-a133-0242ac110018 STEP: Creating a pod to test consume configMaps Jul 1 08:44:50.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-206d5557-bb77-11ea-a133-0242ac110018" in namespace "e2e-tests-configmap-ng62z" to be "success or failure" Jul 1 08:44:50.813: INFO: Pod "pod-configmaps-206d5557-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.783858ms Jul 1 08:44:52.927: INFO: Pod "pod-configmaps-206d5557-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131332192s Jul 1 08:44:54.931: INFO: Pod "pod-configmaps-206d5557-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135237879s Jul 1 08:44:56.934: INFO: Pod "pod-configmaps-206d5557-bb77-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.138886786s STEP: Saw pod success Jul 1 08:44:56.934: INFO: Pod "pod-configmaps-206d5557-bb77-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:44:56.937: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-206d5557-bb77-11ea-a133-0242ac110018 container configmap-volume-test: STEP: delete the pod Jul 1 08:44:56.962: INFO: Waiting for pod pod-configmaps-206d5557-bb77-11ea-a133-0242ac110018 to disappear Jul 1 08:44:56.979: INFO: Pod pod-configmaps-206d5557-bb77-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:44:56.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-ng62z" for this suite. Jul 1 08:45:03.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:45:03.398: INFO: namespace: e2e-tests-configmap-ng62z, resource: bindings, ignored listing per whitelist Jul 1 08:45:03.409: INFO: namespace e2e-tests-configmap-ng62z deletion completed in 6.426433589s • [SLOW TEST:12.858 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jul 1 08:45:03.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 08:45:03.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28158172-bb77-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-pz988" to be "success or failure" Jul 1 08:45:03.644: INFO: Pod "downwardapi-volume-28158172-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.76062ms Jul 1 08:45:05.647: INFO: Pod "downwardapi-volume-28158172-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01887495s Jul 1 08:45:07.650: INFO: Pod "downwardapi-volume-28158172-bb77-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.022043666s Jul 1 08:45:09.654: INFO: Pod "downwardapi-volume-28158172-bb77-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025718861s STEP: Saw pod success Jul 1 08:45:09.654: INFO: Pod "downwardapi-volume-28158172-bb77-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:45:09.657: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-28158172-bb77-11ea-a133-0242ac110018 container client-container: STEP: delete the pod Jul 1 08:45:09.731: INFO: Waiting for pod downwardapi-volume-28158172-bb77-11ea-a133-0242ac110018 to disappear Jul 1 08:45:09.742: INFO: Pod downwardapi-volume-28158172-bb77-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:45:09.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pz988" for this suite. Jul 1 08:45:15.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:45:15.782: INFO: namespace: e2e-tests-downward-api-pz988, resource: bindings, ignored listing per whitelist Jul 1 08:45:15.833: INFO: namespace e2e-tests-downward-api-pz988 deletion completed in 6.088091742s • [SLOW TEST:12.424 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:45:15.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 08:45:16.994: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2fe3fbdc-bb77-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001f18272), BlockOwnerDeletion:(*bool)(0xc001f18273)}} Jul 1 08:45:17.012: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2fc429b7-bb77-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0020abde2), BlockOwnerDeletion:(*bool)(0xc0020abde3)}} Jul 1 08:45:17.216: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2fc4d8fe-bb77-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001f18442), BlockOwnerDeletion:(*bool)(0xc001f18443)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:45:22.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-k4jt4" for this suite. 
Jul 1 08:45:28.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:45:28.391: INFO: namespace: e2e-tests-gc-k4jt4, resource: bindings, ignored listing per whitelist Jul 1 08:45:28.461: INFO: namespace e2e-tests-gc-k4jt4 deletion completed in 6.211829409s • [SLOW TEST:12.624 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:45:28.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6q77c STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 08:45:29.127: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 1 08:45:53.777: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6q77c PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Jul 1 08:45:53.777: INFO: >>> kubeConfig: /root/.kube/config I0701 08:45:53.813504 6 log.go:172] (0xc000a0a4d0) (0xc0000fde00) Create stream I0701 08:45:53.813530 6 log.go:172] (0xc000a0a4d0) (0xc0000fde00) Stream added, broadcasting: 1 I0701 08:45:53.815744 6 log.go:172] (0xc000a0a4d0) Reply frame received for 1 I0701 08:45:53.815790 6 log.go:172] (0xc000a0a4d0) (0xc001d2ff40) Create stream I0701 08:45:53.815807 6 log.go:172] (0xc000a0a4d0) (0xc001d2ff40) Stream added, broadcasting: 3 I0701 08:45:53.816915 6 log.go:172] (0xc000a0a4d0) Reply frame received for 3 I0701 08:45:53.816971 6 log.go:172] (0xc000a0a4d0) (0xc000707540) Create stream I0701 08:45:53.816989 6 log.go:172] (0xc000a0a4d0) (0xc000707540) Stream added, broadcasting: 5 I0701 08:45:53.817936 6 log.go:172] (0xc000a0a4d0) Reply frame received for 5 I0701 08:45:54.900931 6 log.go:172] (0xc000a0a4d0) Data frame received for 3 I0701 08:45:54.900978 6 log.go:172] (0xc001d2ff40) (3) Data frame handling I0701 08:45:54.901006 6 log.go:172] (0xc001d2ff40) (3) Data frame sent I0701 08:45:54.901022 6 log.go:172] (0xc000a0a4d0) Data frame received for 3 I0701 08:45:54.901029 6 log.go:172] (0xc001d2ff40) (3) Data frame handling I0701 08:45:54.901074 6 log.go:172] (0xc000a0a4d0) Data frame received for 5 I0701 08:45:54.901107 6 log.go:172] (0xc000707540) (5) Data frame handling I0701 08:45:54.903685 6 log.go:172] (0xc000a0a4d0) Data frame received for 1 I0701 08:45:54.903719 6 log.go:172] (0xc0000fde00) (1) Data frame handling I0701 08:45:54.903757 6 log.go:172] (0xc0000fde00) (1) Data frame sent I0701 08:45:54.903782 6 log.go:172] (0xc000a0a4d0) (0xc0000fde00) Stream removed, broadcasting: 1 I0701 08:45:54.903884 6 log.go:172] (0xc000a0a4d0) Go away received I0701 08:45:54.903909 6 log.go:172] (0xc000a0a4d0) (0xc0000fde00) Stream removed, broadcasting: 1 I0701 08:45:54.903923 6 log.go:172] (0xc000a0a4d0) (0xc001d2ff40) Stream removed, broadcasting: 3 I0701 08:45:54.903933 6 log.go:172] 
(0xc000a0a4d0) (0xc000707540) Stream removed, broadcasting: 5
Jul 1 08:45:54.903: INFO: Found all expected endpoints: [netserver-0]
Jul 1 08:45:54.907: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.95 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6q77c PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 1 08:45:54.907: INFO: >>> kubeConfig: /root/.kube/config
I0701 08:45:54.940292 6 log.go:172] (0xc001dc0210) (0xc000707d60) Create stream
I0701 08:45:54.940317 6 log.go:172] (0xc001dc0210) (0xc000707d60) Stream added, broadcasting: 1
I0701 08:45:54.948753 6 log.go:172] (0xc001dc0210) Reply frame received for 1
I0701 08:45:54.948805 6 log.go:172] (0xc001dc0210) (0xc000370780) Create stream
I0701 08:45:54.948818 6 log.go:172] (0xc001dc0210) (0xc000370780) Stream added, broadcasting: 3
I0701 08:45:54.950638 6 log.go:172] (0xc001dc0210) Reply frame received for 3
I0701 08:45:54.950685 6 log.go:172] (0xc001dc0210) (0xc000350b40) Create stream
I0701 08:45:54.950701 6 log.go:172] (0xc001dc0210) (0xc000350b40) Stream added, broadcasting: 5
I0701 08:45:54.952611 6 log.go:172] (0xc001dc0210) Reply frame received for 5
I0701 08:45:56.039954 6 log.go:172] (0xc001dc0210) Data frame received for 5
I0701 08:45:56.040012 6 log.go:172] (0xc000350b40) (5) Data frame handling
I0701 08:45:56.040070 6 log.go:172] (0xc001dc0210) Data frame received for 3
I0701 08:45:56.040099 6 log.go:172] (0xc000370780) (3) Data frame handling
I0701 08:45:56.040121 6 log.go:172] (0xc000370780) (3) Data frame sent
I0701 08:45:56.040237 6 log.go:172] (0xc001dc0210) Data frame received for 3
I0701 08:45:56.040274 6 log.go:172] (0xc000370780) (3) Data frame handling
I0701 08:45:56.042365 6 log.go:172] (0xc001dc0210) Data frame received for 1
I0701 08:45:56.042409 6 log.go:172] (0xc000707d60) (1) Data frame handling
I0701 08:45:56.042445 6 log.go:172] (0xc000707d60) (1) Data frame sent
I0701 08:45:56.042470 6 log.go:172] (0xc001dc0210) (0xc000707d60) Stream removed, broadcasting: 1
I0701 08:45:56.042506 6 log.go:172] (0xc001dc0210) Go away received
I0701 08:45:56.042672 6 log.go:172] (0xc001dc0210) (0xc000707d60) Stream removed, broadcasting: 1
I0701 08:45:56.042705 6 log.go:172] (0xc001dc0210) (0xc000370780) Stream removed, broadcasting: 3
I0701 08:45:56.042726 6 log.go:172] (0xc001dc0210) (0xc000350b40) Stream removed, broadcasting: 5
Jul 1 08:45:56.042: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:45:56.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-6q77c" for this suite.
Jul 1 08:46:22.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:46:22.126: INFO: namespace: e2e-tests-pod-network-test-6q77c, resource: bindings, ignored listing per whitelist
Jul 1 08:46:22.208: INFO: namespace e2e-tests-pod-network-test-6q77c deletion completed in 26.160742691s
• [SLOW TEST:53.746 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:46:22.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 1 08:46:26.885: INFO: Successfully updated pod "annotationupdate56fca09d-bb77-11ea-a133-0242ac110018"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:46:30.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fcsbk" for this suite.
Jul 1 08:46:54.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:46:55.006: INFO: namespace: e2e-tests-downward-api-fcsbk, resource: bindings, ignored listing per whitelist
Jul 1 08:46:55.056: INFO: namespace e2e-tests-downward-api-fcsbk deletion completed in 24.136764835s
• [SLOW TEST:32.848 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:46:55.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 1 08:46:55.218: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:47:01.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-5tqm6" for this suite.
Jul 1 08:47:07.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:47:07.936: INFO: namespace: e2e-tests-init-container-5tqm6, resource: bindings, ignored listing per whitelist
Jul 1 08:47:07.987: INFO: namespace e2e-tests-init-container-5tqm6 deletion completed in 6.104763982s
• [SLOW TEST:12.931 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:47:07.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0701 08:47:09.224608 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 1 08:47:09.224: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:47:09.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xmxsv" for this suite.
Jul 1 08:47:15.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:47:15.408: INFO: namespace: e2e-tests-gc-xmxsv, resource: bindings, ignored listing per whitelist
Jul 1 08:47:15.418: INFO: namespace e2e-tests-gc-xmxsv deletion completed in 6.190927403s
• [SLOW TEST:7.430 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:47:15.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-76c2cd6b-bb77-11ea-a133-0242ac110018
STEP: Creating a pod to test consume configMaps
Jul 1 08:47:15.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-76c57d73-bb77-11ea-a133-0242ac110018" in namespace "e2e-tests-configmap-5lwsq" to be "success or failure"
Jul 1 08:47:15.725: INFO: Pod "pod-configmaps-76c57d73-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 61.416348ms
Jul 1 08:47:17.809: INFO: Pod "pod-configmaps-76c57d73-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145229334s
Jul 1 08:47:19.814: INFO: Pod "pod-configmaps-76c57d73-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149508187s
Jul 1 08:47:21.818: INFO: Pod "pod-configmaps-76c57d73-bb77-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.1535901s
STEP: Saw pod success
Jul 1 08:47:21.818: INFO: Pod "pod-configmaps-76c57d73-bb77-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:47:21.821: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-76c57d73-bb77-11ea-a133-0242ac110018 container configmap-volume-test:
STEP: delete the pod
Jul 1 08:47:21.884: INFO: Waiting for pod pod-configmaps-76c57d73-bb77-11ea-a133-0242ac110018 to disappear
Jul 1 08:47:21.905: INFO: Pod pod-configmaps-76c57d73-bb77-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:47:21.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5lwsq" for this suite.
Jul 1 08:47:27.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:47:27.984: INFO: namespace: e2e-tests-configmap-5lwsq, resource: bindings, ignored listing per whitelist
Jul 1 08:47:27.999: INFO: namespace e2e-tests-configmap-5lwsq deletion completed in 6.089777469s
• [SLOW TEST:12.581 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:47:28.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 1 08:47:28.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e376ae5-bb77-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-csqhj" to be "success or failure"
Jul 1 08:47:28.139: INFO: Pod "downwardapi-volume-7e376ae5-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.963131ms
Jul 1 08:47:30.144: INFO: Pod "downwardapi-volume-7e376ae5-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00808154s
Jul 1 08:47:32.149: INFO: Pod "downwardapi-volume-7e376ae5-bb77-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013176387s
STEP: Saw pod success
Jul 1 08:47:32.149: INFO: Pod "downwardapi-volume-7e376ae5-bb77-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:47:32.152: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7e376ae5-bb77-11ea-a133-0242ac110018 container client-container:
STEP: delete the pod
Jul 1 08:47:32.472: INFO: Waiting for pod downwardapi-volume-7e376ae5-bb77-11ea-a133-0242ac110018 to disappear
Jul 1 08:47:32.493: INFO: Pod downwardapi-volume-7e376ae5-bb77-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:47:32.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-csqhj" for this suite.
Jul 1 08:47:38.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:47:38.579: INFO: namespace: e2e-tests-downward-api-csqhj, resource: bindings, ignored listing per whitelist
Jul 1 08:47:38.653: INFO: namespace e2e-tests-downward-api-csqhj deletion completed in 6.156712176s
• [SLOW TEST:10.654 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:47:38.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-848c8da5-bb77-11ea-a133-0242ac110018
STEP: Creating a pod to test consume secrets
Jul 1 08:47:38.778: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-848f3a0d-bb77-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-759k4" to be "success or failure"
Jul 1 08:47:38.781: INFO: Pod "pod-projected-secrets-848f3a0d-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.443913ms
Jul 1 08:47:40.786: INFO: Pod "pod-projected-secrets-848f3a0d-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007705986s
Jul 1 08:47:42.789: INFO: Pod "pod-projected-secrets-848f3a0d-bb77-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011541535s
STEP: Saw pod success
Jul 1 08:47:42.790: INFO: Pod "pod-projected-secrets-848f3a0d-bb77-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:47:42.792: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-848f3a0d-bb77-11ea-a133-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
Jul 1 08:47:42.828: INFO: Waiting for pod pod-projected-secrets-848f3a0d-bb77-11ea-a133-0242ac110018 to disappear
Jul 1 08:47:42.836: INFO: Pod pod-projected-secrets-848f3a0d-bb77-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:47:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-759k4" for this suite.
Jul 1 08:47:48.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:47:48.919: INFO: namespace: e2e-tests-projected-759k4, resource: bindings, ignored listing per whitelist
Jul 1 08:47:48.927: INFO: namespace e2e-tests-projected-759k4 deletion completed in 6.088367658s
• [SLOW TEST:10.274 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:47:48.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-96x9n
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 1 08:47:49.024: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 1 08:48:15.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.7:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-96x9n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 1 08:48:15.233: INFO: >>> kubeConfig: /root/.kube/config
I0701 08:48:15.270170 6 log.go:172] (0xc001dc04d0) (0xc0024bd400) Create stream
I0701 08:48:15.270206 6 log.go:172] (0xc001dc04d0) (0xc0024bd400) Stream added, broadcasting: 1
I0701 08:48:15.272492 6 log.go:172] (0xc001dc04d0) Reply frame received for 1
I0701 08:48:15.272535 6 log.go:172] (0xc001dc04d0) (0xc001b9fae0) Create stream
I0701 08:48:15.272547 6 log.go:172] (0xc001dc04d0) (0xc001b9fae0) Stream added, broadcasting: 3
I0701 08:48:15.273612 6 log.go:172] (0xc001dc04d0) Reply frame received for 3
I0701 08:48:15.273647 6 log.go:172] (0xc001dc04d0) (0xc0024bd4a0) Create stream
I0701 08:48:15.273664 6 log.go:172] (0xc001dc04d0) (0xc0024bd4a0) Stream added, broadcasting: 5
I0701 08:48:15.274731 6 log.go:172] (0xc001dc04d0) Reply frame received for 5
I0701 08:48:15.384225 6 log.go:172] (0xc001dc04d0) Data frame received for 3
I0701 08:48:15.384246 6 log.go:172] (0xc001b9fae0) (3) Data frame handling
I0701 08:48:15.384254 6 log.go:172] (0xc001b9fae0) (3) Data frame sent
I0701 08:48:15.384258 6 log.go:172] (0xc001dc04d0) Data frame received for 3
I0701 08:48:15.384262 6 log.go:172] (0xc001b9fae0) (3) Data frame handling
I0701 08:48:15.384275 6 log.go:172] (0xc001dc04d0) Data frame received for 5
I0701 08:48:15.384296 6 log.go:172] (0xc0024bd4a0) (5) Data frame handling
I0701 08:48:15.386362 6 log.go:172] (0xc001dc04d0) Data frame received for 1
I0701 08:48:15.386376 6 log.go:172] (0xc0024bd400) (1) Data frame handling
I0701 08:48:15.386389 6 log.go:172] (0xc0024bd400) (1) Data frame sent
I0701 08:48:15.386402 6 log.go:172] (0xc001dc04d0) (0xc0024bd400) Stream removed, broadcasting: 1
I0701 08:48:15.386421 6 log.go:172] (0xc001dc04d0) Go away received
I0701 08:48:15.386587 6 log.go:172] (0xc001dc04d0) (0xc0024bd400) Stream removed, broadcasting: 1
I0701 08:48:15.386613 6 log.go:172] (0xc001dc04d0) (0xc001b9fae0) Stream removed, broadcasting: 3
I0701 08:48:15.386633 6 log.go:172] (0xc001dc04d0) (0xc0024bd4a0) Stream removed, broadcasting: 5
Jul 1 08:48:15.386: INFO: Found all expected endpoints: [netserver-0]
Jul 1 08:48:15.421: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.102:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-96x9n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 1 08:48:15.421: INFO: >>> kubeConfig: /root/.kube/config
I0701 08:48:15.455243 6 log.go:172] (0xc000e060b0) (0xc001b4afa0) Create stream
I0701 08:48:15.455284 6 log.go:172] (0xc000e060b0) (0xc001b4afa0) Stream added, broadcasting: 1
I0701 08:48:15.457784 6 log.go:172] (0xc000e060b0) Reply frame received for 1
I0701 08:48:15.457837 6 log.go:172] (0xc000e060b0) (0xc0010321e0) Create stream
I0701 08:48:15.457851 6 log.go:172] (0xc000e060b0) (0xc0010321e0) Stream added, broadcasting: 3
I0701 08:48:15.458693 6 log.go:172] (0xc000e060b0) Reply frame received for 3
I0701 08:48:15.458719 6 log.go:172] (0xc000e060b0) (0xc0018446e0) Create stream
I0701 08:48:15.458730 6 log.go:172] (0xc000e060b0) (0xc0018446e0) Stream added, broadcasting: 5
I0701 08:48:15.459994 6 log.go:172] (0xc000e060b0) Reply frame received for 5
I0701 08:48:15.524584 6 log.go:172] (0xc000e060b0) Data frame received for 3
I0701 08:48:15.524608 6 log.go:172] (0xc0010321e0) (3) Data frame handling
I0701 08:48:15.524638 6 log.go:172] (0xc0010321e0) (3) Data frame sent
I0701 08:48:15.524890 6 log.go:172] (0xc000e060b0) Data frame received for 5
I0701 08:48:15.524906 6 log.go:172] (0xc0018446e0) (5) Data frame handling
I0701 08:48:15.524947 6 log.go:172] (0xc000e060b0) Data frame received for 3
I0701 08:48:15.524970 6 log.go:172] (0xc0010321e0) (3) Data frame handling
I0701 08:48:15.526501 6 log.go:172] (0xc000e060b0) Data frame received for 1
I0701 08:48:15.526522 6 log.go:172] (0xc001b4afa0) (1) Data frame handling
I0701 08:48:15.526533 6 log.go:172] (0xc001b4afa0) (1) Data frame sent
I0701 08:48:15.526546 6 log.go:172] (0xc000e060b0) (0xc001b4afa0) Stream removed, broadcasting: 1
I0701 08:48:15.526562 6 log.go:172] (0xc000e060b0) Go away received
I0701 08:48:15.526694 6 log.go:172] (0xc000e060b0) (0xc001b4afa0) Stream removed, broadcasting: 1
I0701 08:48:15.526722 6 log.go:172] (0xc000e060b0) (0xc0010321e0) Stream removed, broadcasting: 3
I0701 08:48:15.526762 6 log.go:172] (0xc000e060b0) (0xc0018446e0) Stream removed, broadcasting: 5
Jul 1 08:48:15.526: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:48:15.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-96x9n" for this suite.
Jul 1 08:48:39.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:48:39.583: INFO: namespace: e2e-tests-pod-network-test-96x9n, resource: bindings, ignored listing per whitelist
Jul 1 08:48:39.639: INFO: namespace e2e-tests-pod-network-test-96x9n deletion completed in 24.108889749s
• [SLOW TEST:50.711 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:48:39.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0701 08:48:52.974263 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 1 08:48:52.974: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:48:52.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-m4szm" for this suite.
Jul 1 08:49:03.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:49:03.872: INFO: namespace: e2e-tests-gc-m4szm, resource: bindings, ignored listing per whitelist
Jul 1 08:49:03.894: INFO: namespace e2e-tests-gc-m4szm deletion completed in 10.899279659s
• [SLOW TEST:24.255 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:49:03.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-b75f85c3-bb77-11ea-a133-0242ac110018
STEP: Creating configMap with name cm-test-opt-upd-b75f863f-bb77-11ea-a133-0242ac110018
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b75f85c3-bb77-11ea-a133-0242ac110018
STEP: Updating configmap cm-test-opt-upd-b75f863f-bb77-11ea-a133-0242ac110018
STEP: Creating configMap with name cm-test-opt-create-b75f8678-bb77-11ea-a133-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:50:24.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dxmwg" for this suite.
Jul 1 08:50:48.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:50:48.700: INFO: namespace: e2e-tests-configmap-dxmwg, resource: bindings, ignored listing per whitelist
Jul 1 08:50:48.716: INFO: namespace e2e-tests-configmap-dxmwg deletion completed in 24.088680615s
• [SLOW TEST:104.822 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:50:48.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:50:52.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-75j9r" for this suite.
Jul 1 08:50:58.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:50:59.028: INFO: namespace: e2e-tests-kubelet-test-75j9r, resource: bindings, ignored listing per whitelist
Jul 1 08:50:59.066: INFO: namespace e2e-tests-kubelet-test-75j9r deletion completed in 6.222685584s
• [SLOW TEST:10.350 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:50:59.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 1 08:50:59.244: INFO: Waiting up to 5m0s for pod "pod-fc0bbd44-bb77-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-d5q8g" to be "success or failure"
Jul 1 08:50:59.247: INFO: Pod "pod-fc0bbd44-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.17765ms
Jul 1 08:51:01.252: INFO: Pod "pod-fc0bbd44-bb77-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00727312s
Jul 1 08:51:03.255: INFO: Pod "pod-fc0bbd44-bb77-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01065701s
STEP: Saw pod success
Jul 1 08:51:03.255: INFO: Pod "pod-fc0bbd44-bb77-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 08:51:03.257: INFO: Trying to get logs from node hunter-worker2 pod pod-fc0bbd44-bb77-11ea-a133-0242ac110018 container test-container:
STEP: delete the pod
Jul 1 08:51:03.521: INFO: Waiting for pod pod-fc0bbd44-bb77-11ea-a133-0242ac110018 to disappear
Jul 1 08:51:03.839: INFO: Pod pod-fc0bbd44-bb77-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:51:03.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d5q8g" for this suite.
Jul 1 08:51:10.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:51:10.302: INFO: namespace: e2e-tests-emptydir-d5q8g, resource: bindings, ignored listing per whitelist Jul 1 08:51:10.328: INFO: namespace e2e-tests-emptydir-d5q8g deletion completed in 6.486436362s • [SLOW TEST:11.262 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:51:10.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 1 08:51:11.786: INFO: Waiting up to 5m0s for pod "downward-api-036f5b73-bb78-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-tvr4l" to be "success or failure" Jul 1 08:51:11.824: INFO: Pod "downward-api-036f5b73-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.82536ms Jul 1 08:51:13.828: INFO: Pod "downward-api-036f5b73-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042359608s Jul 1 08:51:16.967: INFO: Pod "downward-api-036f5b73-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.180801917s Jul 1 08:51:18.979: INFO: Pod "downward-api-036f5b73-bb78-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 7.192899256s Jul 1 08:51:20.983: INFO: Pod "downward-api-036f5b73-bb78-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.196783964s STEP: Saw pod success Jul 1 08:51:20.983: INFO: Pod "downward-api-036f5b73-bb78-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:51:20.986: INFO: Trying to get logs from node hunter-worker pod downward-api-036f5b73-bb78-11ea-a133-0242ac110018 container dapi-container: STEP: delete the pod Jul 1 08:51:21.010: INFO: Waiting for pod downward-api-036f5b73-bb78-11ea-a133-0242ac110018 to disappear Jul 1 08:51:21.117: INFO: Pod downward-api-036f5b73-bb78-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:51:21.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tvr4l" for this suite. 
Jul 1 08:51:27.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:51:27.288: INFO: namespace: e2e-tests-downward-api-tvr4l, resource: bindings, ignored listing per whitelist Jul 1 08:51:27.327: INFO: namespace e2e-tests-downward-api-tvr4l deletion completed in 6.205829685s • [SLOW TEST:16.999 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:51:27.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jul 1 08:51:27.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6m2m7' Jul 1 08:51:31.633: INFO: stderr: "" Jul 1 08:51:31.633: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jul 1 08:51:32.778: INFO: Selector matched 1 pods for map[app:redis] Jul 1 08:51:32.778: INFO: Found 0 / 1 Jul 1 08:51:33.796: INFO: Selector matched 1 pods for map[app:redis] Jul 1 08:51:33.796: INFO: Found 0 / 1 Jul 1 08:51:34.637: INFO: Selector matched 1 pods for map[app:redis] Jul 1 08:51:34.637: INFO: Found 0 / 1 Jul 1 08:51:35.639: INFO: Selector matched 1 pods for map[app:redis] Jul 1 08:51:35.639: INFO: Found 1 / 1 Jul 1 08:51:35.639: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jul 1 08:51:35.643: INFO: Selector matched 1 pods for map[app:redis] Jul 1 08:51:35.643: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 1 08:51:35.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-wpwkw --namespace=e2e-tests-kubectl-6m2m7 -p {"metadata":{"annotations":{"x":"y"}}}' Jul 1 08:51:35.752: INFO: stderr: "" Jul 1 08:51:35.752: INFO: stdout: "pod/redis-master-wpwkw patched\n" STEP: checking annotations Jul 1 08:51:35.764: INFO: Selector matched 1 pods for map[app:redis] Jul 1 08:51:35.764: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:51:35.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6m2m7" for this suite. 
Jul 1 08:51:57.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:51:57.866: INFO: namespace: e2e-tests-kubectl-6m2m7, resource: bindings, ignored listing per whitelist Jul 1 08:51:57.874: INFO: namespace e2e-tests-kubectl-6m2m7 deletion completed in 22.106009873s • [SLOW TEST:30.546 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:51:57.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jul 1 08:52:05.028: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:52:06.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-6j62g" for this suite. Jul 1 08:52:26.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:52:26.122: INFO: namespace: e2e-tests-replicaset-6j62g, resource: bindings, ignored listing per whitelist Jul 1 08:52:26.153: INFO: namespace e2e-tests-replicaset-6j62g deletion completed in 20.098077265s • [SLOW TEST:28.279 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:52:26.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-2fe9818c-bb78-11ea-a133-0242ac110018 STEP: Creating a pod to test consume configMaps Jul 1 08:52:26.276: INFO: Waiting up to 5m0s for pod "pod-configmaps-2fea18a0-bb78-11ea-a133-0242ac110018" 
in namespace "e2e-tests-configmap-wj26t" to be "success or failure" Jul 1 08:52:26.286: INFO: Pod "pod-configmaps-2fea18a0-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.271387ms Jul 1 08:52:28.291: INFO: Pod "pod-configmaps-2fea18a0-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014615883s Jul 1 08:52:30.294: INFO: Pod "pod-configmaps-2fea18a0-bb78-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018276617s STEP: Saw pod success Jul 1 08:52:30.294: INFO: Pod "pod-configmaps-2fea18a0-bb78-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:52:30.297: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2fea18a0-bb78-11ea-a133-0242ac110018 container configmap-volume-test: STEP: delete the pod Jul 1 08:52:30.362: INFO: Waiting for pod pod-configmaps-2fea18a0-bb78-11ea-a133-0242ac110018 to disappear Jul 1 08:52:30.640: INFO: Pod pod-configmaps-2fea18a0-bb78-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:52:30.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wj26t" for this suite. 
Jul 1 08:52:36.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:52:36.726: INFO: namespace: e2e-tests-configmap-wj26t, resource: bindings, ignored listing per whitelist Jul 1 08:52:36.747: INFO: namespace e2e-tests-configmap-wj26t deletion completed in 6.102354058s • [SLOW TEST:10.594 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:52:36.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 08:52:36.885: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 1 08:52:41.888: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 1 08:52:41.889: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 1 08:52:42.103: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-6fmmc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6fmmc/deployments/test-cleanup-deployment,UID:3942f94a-bb78-11ea-99e8-0242ac110002,ResourceVersion:18834383,Generation:1,CreationTimestamp:2020-07-01 08:52:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jul 1 08:52:42.181: INFO: New ReplicaSet "test-cleanup-deployment-6df768c57" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-6df768c57,GenerateName:,Namespace:e2e-tests-deployment-6fmmc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6fmmc/replicasets/test-cleanup-deployment-6df768c57,UID:395c4996-bb78-11ea-99e8-0242ac110002,ResourceVersion:18834385,Generation:1,CreationTimestamp:2020-07-01 08:52:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3942f94a-bb78-11ea-99e8-0242ac110002 0xc001c76860 
0xc001c76861}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 1 08:52:42.181: INFO: All old ReplicaSets of Deployment 
"test-cleanup-deployment": Jul 1 08:52:42.182: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-6fmmc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6fmmc/replicasets/test-cleanup-controller,UID:363cc23c-bb78-11ea-99e8-0242ac110002,ResourceVersion:18834384,Generation:1,CreationTimestamp:2020-07-01 08:52:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3942f94a-bb78-11ea-99e8-0242ac110002 0xc001c767a7 0xc001c767a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 1 08:52:42.313: INFO: Pod "test-cleanup-controller-65rj6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-65rj6,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-6fmmc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6fmmc/pods/test-cleanup-controller-65rj6,UID:36409512-bb78-11ea-99e8-0242ac110002,ResourceVersion:18834379,Generation:0,CreationTimestamp:2020-07-01 08:52:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 363cc23c-bb78-11ea-99e8-0242ac110002 0xc001c772f7 0xc001c772f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zsf2w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zsf2w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zsf2w true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c773d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c773f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:52:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:52:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:52:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:52:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.113,StartTime:2020-07-01 08:52:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-01 08:52:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a8a6a7ced261d23b232bb2e95d587a18090afa09e58b4605242d8ef0bf1592c8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:52:42.313: INFO: Pod "test-cleanup-deployment-6df768c57-m7r28" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-6df768c57-m7r28,GenerateName:test-cleanup-deployment-6df768c57-,Namespace:e2e-tests-deployment-6fmmc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6fmmc/pods/test-cleanup-deployment-6df768c57-m7r28,UID:396452a2-bb78-11ea-99e8-0242ac110002,ResourceVersion:18834392,Generation:0,CreationTimestamp:2020-07-01 08:52:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-6df768c57 395c4996-bb78-11ea-99e8-0242ac110002 0xc001c774c0 0xc001c774c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zsf2w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zsf2w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-zsf2w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c77530} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c77550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:52:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:52:42.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6fmmc" for this suite. 
Jul 1 08:52:50.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:52:50.449: INFO: namespace: e2e-tests-deployment-6fmmc, resource: bindings, ignored listing per whitelist Jul 1 08:52:50.520: INFO: namespace e2e-tests-deployment-6fmmc deletion completed in 8.117848209s • [SLOW TEST:13.772 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:52:50.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 1 08:52:57.213: INFO: Successfully updated pod "pod-update-3e7403f3-bb78-11ea-a133-0242ac110018" STEP: verifying the updated pod is in kubernetes Jul 1 08:52:57.255: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:52:57.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-tests-pods-j2c6s" for this suite. Jul 1 08:53:19.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:53:19.343: INFO: namespace: e2e-tests-pods-j2c6s, resource: bindings, ignored listing per whitelist Jul 1 08:53:19.354: INFO: namespace e2e-tests-pods-j2c6s deletion completed in 22.095968619s • [SLOW TEST:28.834 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:53:19.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 08:53:19.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jul 1 08:53:19.595: INFO: stderr: "" Jul 1 08:53:19.595: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-01T07:27:42Z\", 
GoVersion:\"go1.11.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:53:19.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wjvbs" for this suite. Jul 1 08:53:25.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:53:25.658: INFO: namespace: e2e-tests-kubectl-wjvbs, resource: bindings, ignored listing per whitelist Jul 1 08:53:25.699: INFO: namespace e2e-tests-kubectl-wjvbs deletion completed in 6.100036459s • [SLOW TEST:6.344 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:53:25.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in 
namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 1 08:53:25.793: INFO: Waiting up to 5m0s for pod "pod-5363e638-bb78-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-v4dml" to be "success or failure" Jul 1 08:53:25.797: INFO: Pod "pod-5363e638-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019186ms Jul 1 08:53:27.822: INFO: Pod "pod-5363e638-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028938932s Jul 1 08:53:29.826: INFO: Pod "pod-5363e638-bb78-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033069411s STEP: Saw pod success Jul 1 08:53:29.826: INFO: Pod "pod-5363e638-bb78-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:53:29.829: INFO: Trying to get logs from node hunter-worker pod pod-5363e638-bb78-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 08:53:29.852: INFO: Waiting for pod pod-5363e638-bb78-11ea-a133-0242ac110018 to disappear Jul 1 08:53:29.856: INFO: Pod pod-5363e638-bb78-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:53:29.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-v4dml" for this suite. 
Jul 1 08:53:35.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:53:35.956: INFO: namespace: e2e-tests-emptydir-v4dml, resource: bindings, ignored listing per whitelist Jul 1 08:53:36.038: INFO: namespace e2e-tests-emptydir-v4dml deletion completed in 6.179313214s • [SLOW TEST:10.339 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:53:36.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:53:42.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-f5qwf" for this suite. Jul 1 08:53:48.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:53:48.475: INFO: namespace: e2e-tests-namespaces-f5qwf, resource: bindings, ignored listing per whitelist Jul 1 08:53:48.510: INFO: namespace e2e-tests-namespaces-f5qwf deletion completed in 6.098477998s STEP: Destroying namespace "e2e-tests-nsdeletetest-cbh8x" for this suite. Jul 1 08:53:48.512: INFO: Namespace e2e-tests-nsdeletetest-cbh8x was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-krnn7" for this suite. Jul 1 08:53:54.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:53:54.593: INFO: namespace: e2e-tests-nsdeletetest-krnn7, resource: bindings, ignored listing per whitelist Jul 1 08:53:54.623: INFO: namespace e2e-tests-nsdeletetest-krnn7 deletion completed in 6.111553477s • [SLOW TEST:18.585 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle 
Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:53:54.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 1 08:54:02.883: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:02.888: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:04.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:04.948: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:06.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:07.050: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:08.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:08.893: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:10.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:10.893: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:12.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:12.893: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:14.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:14.892: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:16.888: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Jul 1 08:54:16.894: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:18.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:18.893: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:20.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:20.893: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:22.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:22.893: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:24.889: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:24.893: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:26.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:26.894: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:28.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:28.893: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:30.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:30.893: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 08:54:32.888: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 08:54:32.893: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:54:32.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-cl44l" for this suite. 
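The long run of "Waiting for pod pod-with-poststart-exec-hook to disappear ... still exists" lines above is the framework's generic poll-until-condition loop, checking roughly every 2 seconds against an overall timeout. A minimal sketch of that pattern (the helper name and the simulated lookups are illustrative, not the framework's actual Go code):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll `check` every `interval` seconds until it returns a truthy
    value or `timeout` elapses, the same shape as the repeated
    'Waiting for pod ... to disappear' / 'still exists' log lines."""
    deadline = clock() + timeout
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.0f}s")
        sleep(interval)

# Simulated lookups: the pod still "exists" a few times, then is gone,
# mirroring the still-exists sequence in the log above.
lookups = iter([False, False, False, True])

def pod_gone():
    return next(lookups)

gone = wait_for_condition(pod_gone, interval=0.0)  # no real sleeping here
```

On a timeout the framework fails the spec instead of raising, but the check-sleep-recheck loop against a deadline is the same.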
Jul 1 08:54:54.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:54:54.983: INFO: namespace: e2e-tests-container-lifecycle-hook-cl44l, resource: bindings, ignored listing per whitelist Jul 1 08:54:55.006: INFO: namespace e2e-tests-container-lifecycle-hook-cl44l deletion completed in 22.108218425s • [SLOW TEST:60.382 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:54:55.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jul 1 08:54:55.160: INFO: Waiting up to 5m0s for pod "pod-88a8eb38-bb78-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-dkrlq" to be "success or failure" Jul 1 08:54:55.164: INFO: Pod "pod-88a8eb38-bb78-11ea-a133-0242ac110018": Phase="Pending", 
Reason="", readiness=false. Elapsed: 4.089232ms Jul 1 08:54:57.206: INFO: Pod "pod-88a8eb38-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045610804s Jul 1 08:54:59.210: INFO: Pod "pod-88a8eb38-bb78-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050383328s STEP: Saw pod success Jul 1 08:54:59.211: INFO: Pod "pod-88a8eb38-bb78-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:54:59.213: INFO: Trying to get logs from node hunter-worker2 pod pod-88a8eb38-bb78-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 08:54:59.237: INFO: Waiting for pod pod-88a8eb38-bb78-11ea-a133-0242ac110018 to disappear Jul 1 08:54:59.242: INFO: Pod pod-88a8eb38-bb78-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:54:59.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dkrlq" for this suite. 
Jul 1 08:55:05.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:55:05.313: INFO: namespace: e2e-tests-emptydir-dkrlq, resource: bindings, ignored listing per whitelist Jul 1 08:55:05.331: INFO: namespace e2e-tests-emptydir-dkrlq deletion completed in 6.085213805s • [SLOW TEST:10.324 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:55:05.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 08:55:05.490: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.867587ms) Jul 1 08:55:05.493: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.714679ms) Jul 1 08:55:05.497: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.933239ms) Jul 1 08:55:05.501: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.877363ms) Jul 1 08:55:05.505: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.954375ms) Jul 1 08:55:05.509: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.827529ms) Jul 1 08:55:05.513: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.049947ms) Jul 1 08:55:05.517: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.392939ms) Jul 1 08:55:05.520: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.292063ms) Jul 1 08:55:05.523: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.131826ms) Jul 1 08:55:05.526: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.920766ms) Jul 1 08:55:05.529: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.339652ms) Jul 1 08:55:05.533: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.842455ms) Jul 1 08:55:05.536: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.093138ms) Jul 1 08:55:05.539: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.972164ms) Jul 1 08:55:05.542: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.768851ms) Jul 1 08:55:05.545: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.30562ms) Jul 1 08:55:05.548: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.050921ms) Jul 1 08:55:05.550: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.746962ms) Jul 1 08:55:05.553: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.81341ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:55:05.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-g5czd" for this suite. Jul 1 08:55:11.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:55:11.632: INFO: namespace: e2e-tests-proxy-g5czd, resource: bindings, ignored listing per whitelist Jul 1 08:55:11.677: INFO: namespace e2e-tests-proxy-g5czd deletion completed in 6.120807532s • [SLOW TEST:6.346 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:55:11.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 08:55:11.750: INFO: Creating deployment 
"nginx-deployment" Jul 1 08:55:11.767: INFO: Waiting for observed generation 1 Jul 1 08:55:14.771: INFO: Waiting for all required pods to come up Jul 1 08:55:14.977: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jul 1 08:55:25.219: INFO: Waiting for deployment "nginx-deployment" to complete Jul 1 08:55:25.228: INFO: Updating deployment "nginx-deployment" with a non-existent image Jul 1 08:55:25.234: INFO: Updating deployment nginx-deployment Jul 1 08:55:25.234: INFO: Waiting for observed generation 2 Jul 1 08:55:27.243: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jul 1 08:55:27.245: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jul 1 08:55:27.248: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jul 1 08:55:27.257: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jul 1 08:55:27.257: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jul 1 08:55:27.260: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jul 1 08:55:27.263: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jul 1 08:55:27.263: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jul 1 08:55:27.268: INFO: Updating deployment nginx-deployment Jul 1 08:55:27.268: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jul 1 08:55:27.310: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jul 1 08:55:27.348: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 1 08:55:27.644: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zh2nt/deployments/nginx-deployment,UID:928eb0a9-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835093,Generation:3,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-07-01 08:55:26 +0000 UTC 2020-07-01 08:55:11 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-07-01 08:55:27 +0000 UTC 2020-07-01 08:55:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jul 1 08:55:27.672: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zh2nt/replicasets/nginx-deployment-5c98f8fb5,UID:9a982473-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835134,Generation:3,CreationTimestamp:2020-07-01 08:55:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 928eb0a9-bb78-11ea-99e8-0242ac110002 0xc0029d87b7 0xc0029d87b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 1 08:55:27.672: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jul 1 08:55:27.673: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zh2nt/replicasets/nginx-deployment-85ddf47c5d,UID:9293c59f-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835119,Generation:3,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 928eb0a9-bb78-11ea-99e8-0242ac110002 0xc0029d8877 0xc0029d8878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jul 1 08:55:27.850: INFO: Pod "nginx-deployment-5c98f8fb5-5ntq9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5ntq9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-5ntq9,UID:9bdabc28-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835129,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42197 0xc002a42198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a42210} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002a42230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.850: INFO: Pod "nginx-deployment-5c98f8fb5-66nc9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-66nc9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-66nc9,UID:9a9b4377-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835071,Generation:0,CreationTimestamp:2020-07-01 08:55:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42317 0xc002a42318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a42390} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a423b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-01 08:55:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.851: INFO: Pod "nginx-deployment-5c98f8fb5-8ck9r" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8ck9r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-8ck9r,UID:9bdaccf8-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835123,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42477 0xc002a42478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a42560} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002a42580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.851: INFO: Pod "nginx-deployment-5c98f8fb5-9kv8z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9kv8z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-9kv8z,UID:9a98e344-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835045,Generation:0,CreationTimestamp:2020-07-01 08:55:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a425f7 0xc002a425f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a42670} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a42690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-07-01 08:55:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.851: INFO: Pod "nginx-deployment-5c98f8fb5-dnjlw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dnjlw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-dnjlw,UID:9bd50944-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835097,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42757 0xc002a42758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a427d0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002a427f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.851: INFO: Pod "nginx-deployment-5c98f8fb5-kn4d4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kn4d4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-kn4d4,UID:9bd5ea3c-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835111,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42867 0xc002a42868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a428e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a42900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.852: INFO: Pod "nginx-deployment-5c98f8fb5-mb8hf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mb8hf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-mb8hf,UID:9ab5076f-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835074,Generation:0,CreationTimestamp:2020-07-01 08:55:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42977 0xc002a42978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a429f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a42a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-07-01 08:55:25 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.852: INFO: Pod "nginx-deployment-5c98f8fb5-qsb8v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qsb8v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-qsb8v,UID:9ac2b749-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835077,Generation:0,CreationTimestamp:2020-07-01 08:55:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42ad7 0xc002a42ad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a42b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a42b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-01 08:55:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.852: INFO: Pod "nginx-deployment-5c98f8fb5-r5t87" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-r5t87,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-r5t87,UID:9bdad3b1-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835130,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42c37 0xc002a42c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a42cb0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002a42cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.852: INFO: Pod "nginx-deployment-5c98f8fb5-rw67t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rw67t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-rw67t,UID:9bd67893-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835104,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42d47 0xc002a42d48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a42dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a42de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.852: INFO: Pod "nginx-deployment-5c98f8fb5-wxqgl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wxqgl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-wxqgl,UID:9bdabda6-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835122,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42e57 0xc002a42e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a42ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a42ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.852: INFO: Pod "nginx-deployment-5c98f8fb5-xfktb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xfktb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-xfktb,UID:9a9b4656-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835057,Generation:0,CreationTimestamp:2020-07-01 08:55:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a42f67 0xc002a42f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a42fe0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002a43000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-01 08:55:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.853: INFO: Pod "nginx-deployment-5c98f8fb5-z22jc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z22jc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-5c98f8fb5-z22jc,UID:9be8efd8-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835131,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9a982473-bb78-11ea-99e8-0242ac110002 0xc002a430c7 0xc002a430c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a43140} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.853: INFO: Pod "nginx-deployment-85ddf47c5d-26zl8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-26zl8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-26zl8,UID:9bdb57a2-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835125,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a431d7 0xc002a431d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a43250} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.853: INFO: Pod "nginx-deployment-85ddf47c5d-2zflt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2zflt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-2zflt,UID:929d4cbb-bb78-11ea-99e8-0242ac110002,ResourceVersion:18834994,Generation:0,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a432e7 0xc002a432e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a43360} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.23,StartTime:2020-07-01 08:55:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-01 08:55:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a1b0f1a6b75013529330f8e5cdc1d62c5e4756375e68a7bdaf6577cd87566865}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.853: INFO: Pod "nginx-deployment-85ddf47c5d-5mj74" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5mj74,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-5mj74,UID:9bdb3d94-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835124,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a43447 0xc002a43448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002a434c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a434e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.854: INFO: Pod "nginx-deployment-85ddf47c5d-94qp8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-94qp8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-94qp8,UID:9bd6a706-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835108,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a43557 0xc002a43558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a435d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a435f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.854: INFO: Pod "nginx-deployment-85ddf47c5d-bkhsz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bkhsz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-bkhsz,UID:9bd68a8b-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835105,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a43667 0xc002a43668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a436e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.854: INFO: Pod "nginx-deployment-85ddf47c5d-fxpqd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fxpqd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-fxpqd,UID:9bdb5447-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835127,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a43777 0xc002a43778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002a437f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.854: INFO: Pod "nginx-deployment-85ddf47c5d-gc2jk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gc2jk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-gc2jk,UID:92a50724-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835015,Generation:0,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a43887 0xc002a43888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a43900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.120,StartTime:2020-07-01 08:55:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-01 08:55:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a53d5bc3f281da366b2804a02e5603bfa5038bfe461378a70621c0d87905965d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.855: INFO: Pod "nginx-deployment-85ddf47c5d-j7l9z" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j7l9z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-j7l9z,UID:9bdb5a38-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835126,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a439e7 0xc002a439e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002a43a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.855: INFO: Pod "nginx-deployment-85ddf47c5d-jjgzb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jjgzb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-jjgzb,UID:9bd4f009-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835098,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a43af7 0xc002a43af8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a43b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.855: INFO: Pod "nginx-deployment-85ddf47c5d-kgxp4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kgxp4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-kgxp4,UID:9bd13797-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835132,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a43c07 0xc002a43c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a43c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-07-01 08:55:27 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.855: INFO: Pod "nginx-deployment-85ddf47c5d-kqfck" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kqfck,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-kqfck,UID:9bd50143-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835139,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a43d57 0xc002a43d58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a43dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-01 08:55:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.856: INFO: Pod "nginx-deployment-85ddf47c5d-lncgg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lncgg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-lncgg,UID:929faa9a-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835002,Generation:0,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc002a43ea7 0xc002a43ea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002a43f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a43f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.118,StartTime:2020-07-01 08:55:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-01 08:55:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a64d5147171b49b9c172eadf314b4648b85123255b47bd7795235ad9eef38768}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.856: INFO: Pod "nginx-deployment-85ddf47c5d-n8frf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n8frf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-n8frf,UID:9bdb3ee1-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835128,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc0026fe047 0xc0026fe048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026fe0c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.856: INFO: Pod "nginx-deployment-85ddf47c5d-p9cq4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p9cq4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-p9cq4,UID:9bd69907-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835106,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc0026fe157 0xc0026fe158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0026fe1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.856: INFO: Pod "nginx-deployment-85ddf47c5d-q25vw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q25vw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-q25vw,UID:9296947b-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835009,Generation:0,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc0026fe327 0xc0026fe328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026fe3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.119,StartTime:2020-07-01 08:55:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-01 08:55:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://46f1596e16c49ee71bb64a3a6b1153e6e763e5b43b08b5ef51487216f8f2f41c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.856: INFO: Pod "nginx-deployment-85ddf47c5d-q76gq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q76gq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-q76gq,UID:929fa72d-bb78-11ea-99e8-0242ac110002,ResourceVersion:18834972,Generation:0,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc0026fe487 0xc0026fe488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0026fe560} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.117,StartTime:2020-07-01 08:55:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-01 08:55:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://21303f89dd378fe5db8e8be05d4262c059fbeeb5a298aa76ba2dfb6beb1f2f1a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.857: INFO: Pod "nginx-deployment-85ddf47c5d-q8wpr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q8wpr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-q8wpr,UID:929fa8f0-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835001,Generation:0,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc0026fe647 0xc0026fe648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026fe6c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.22,StartTime:2020-07-01 08:55:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-01 08:55:22 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bfb878329f83a3b96ca20f7bd56ddba1c0dbd06bb6f7ab6ed2c65c412d20b5d4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.857: INFO: Pod "nginx-deployment-85ddf47c5d-swz9r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-swz9r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-swz9r,UID:929d555e-bb78-11ea-99e8-0242ac110002,ResourceVersion:18834974,Generation:0,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc0026fe7a7 0xc0026fe7a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026fe820} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.20,StartTime:2020-07-01 08:55:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-01 08:55:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f13dc3e8937fe8cdacb8aa6db61c940d1fecb3750475dd6c589be6272e2b56f1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.857: INFO: Pod "nginx-deployment-85ddf47c5d-xh25r" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xh25r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-xh25r,UID:929fa5cc-bb78-11ea-99e8-0242ac110002,ResourceVersion:18834992,Generation:0,CreationTimestamp:2020-07-01 08:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc0026fe907 0xc0026fe908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0026fe980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.21,StartTime:2020-07-01 08:55:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-01 08:55:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://dfdcb92a76bf7a8ac8898dd2341bbaae0f455c89d8ff89a7a4f248079845a7f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 1 08:55:27.857: INFO: Pod "nginx-deployment-85ddf47c5d-zwng2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zwng2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh2nt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh2nt/pods/nginx-deployment-85ddf47c5d-zwng2,UID:9bd6a21a-bb78-11ea-99e8-0242ac110002,ResourceVersion:18835103,Generation:0,CreationTimestamp:2020-07-01 08:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9293c59f-bb78-11ea-99e8-0242ac110002 0xc0026fea67 0xc0026fea68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5qxpc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5qxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026feae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026feb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 08:55:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:55:27.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-zh2nt" for this suite. 
Jul 1 08:55:50.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:55:50.181: INFO: namespace: e2e-tests-deployment-zh2nt, resource: bindings, ignored listing per whitelist
Jul 1 08:55:50.241: INFO: namespace e2e-tests-deployment-zh2nt deletion completed in 22.258683946s

• [SLOW TEST:38.563 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:55:50.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-479k
STEP: Creating a pod to test atomic-volume-subpath
Jul 1 08:55:50.838: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-479k" in namespace "e2e-tests-subpath-8g789" to be "success or failure"
Jul 1 08:55:50.923: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Pending", Reason="", readiness=false. Elapsed: 85.416768ms
Jul 1 08:55:52.927: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089520664s
Jul 1 08:55:55.189: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351137581s
Jul 1 08:55:57.194: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.355956349s
Jul 1 08:55:59.198: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Running", Reason="", readiness=false. Elapsed: 8.359780583s
Jul 1 08:56:01.202: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Running", Reason="", readiness=false. Elapsed: 10.363756901s
Jul 1 08:56:03.205: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Running", Reason="", readiness=false. Elapsed: 12.367381127s
Jul 1 08:56:05.279: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Running", Reason="", readiness=false. Elapsed: 14.441186905s
Jul 1 08:56:07.284: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Running", Reason="", readiness=false. Elapsed: 16.445832448s
Jul 1 08:56:09.288: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Running", Reason="", readiness=false. Elapsed: 18.449764762s
Jul 1 08:56:11.292: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Running", Reason="", readiness=false. Elapsed: 20.453978961s
Jul 1 08:56:13.296: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Running", Reason="", readiness=false. Elapsed: 22.458236396s
Jul 1 08:56:15.302: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Running", Reason="", readiness=false. Elapsed: 24.464169946s
Jul 1 08:56:17.306: INFO: Pod "pod-subpath-test-downwardapi-479k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.468433241s
STEP: Saw pod success
Jul 1 08:56:17.306: INFO: Pod "pod-subpath-test-downwardapi-479k" satisfied condition "success or failure"
Jul 1 08:56:17.309: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-479k container test-container-subpath-downwardapi-479k:
STEP: delete the pod
Jul 1 08:56:17.346: INFO: Waiting for pod pod-subpath-test-downwardapi-479k to disappear
Jul 1 08:56:17.366: INFO: Pod pod-subpath-test-downwardapi-479k no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-479k
Jul 1 08:56:17.366: INFO: Deleting pod "pod-subpath-test-downwardapi-479k" in namespace "e2e-tests-subpath-8g789"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 08:56:17.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-8g789" for this suite.
Jul 1 08:56:23.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 08:56:23.458: INFO: namespace: e2e-tests-subpath-8g789, resource: bindings, ignored listing per whitelist
Jul 1 08:56:23.512: INFO: namespace e2e-tests-subpath-8g789 deletion completed in 6.14069054s

• [SLOW TEST:33.271 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 08:56:23.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul 1 08:56:23.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kkhfd'
Jul 1 08:56:23.930: INFO: stderr: ""
Jul 1 08:56:23.930: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 1 08:56:23.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kkhfd'
Jul 1 08:56:24.097: INFO: stderr: ""
Jul 1 08:56:24.097: INFO: stdout: "update-demo-nautilus-mxmq5 update-demo-nautilus-rn467 "
Jul 1 08:56:24.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mxmq5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kkhfd'
Jul 1 08:56:24.219: INFO: stderr: ""
Jul 1 08:56:24.219: INFO: stdout: ""
Jul 1 08:56:24.219: INFO: update-demo-nautilus-mxmq5 is created but not running
Jul 1 08:56:29.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kkhfd'
Jul 1 08:56:29.350: INFO: stderr: ""
Jul 1 08:56:29.350: INFO: stdout: "update-demo-nautilus-mxmq5 update-demo-nautilus-rn467 "
Jul 1 08:56:29.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mxmq5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kkhfd'
Jul 1 08:56:29.445: INFO: stderr: ""
Jul 1 08:56:29.445: INFO: stdout: "true"
Jul 1 08:56:29.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mxmq5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kkhfd'
Jul 1 08:56:29.540: INFO: stderr: ""
Jul 1 08:56:29.540: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 1 08:56:29.540: INFO: validating pod update-demo-nautilus-mxmq5
Jul 1 08:56:29.553: INFO: got data: { "image": "nautilus.jpg" }
Jul 1 08:56:29.553: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 1 08:56:29.553: INFO: update-demo-nautilus-mxmq5 is verified up and running
Jul 1 08:56:29.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rn467 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kkhfd'
Jul 1 08:56:29.658: INFO: stderr: ""
Jul 1 08:56:29.658: INFO: stdout: "true"
Jul 1 08:56:29.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rn467 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kkhfd'
Jul 1 08:56:29.776: INFO: stderr: ""
Jul 1 08:56:29.776: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 1 08:56:29.776: INFO: validating pod update-demo-nautilus-rn467
Jul 1 08:56:29.788: INFO: got data: { "image": "nautilus.jpg" }
Jul 1 08:56:29.788: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 1 08:56:29.788: INFO: update-demo-nautilus-rn467 is verified up and running
STEP: using delete to clean up resources
Jul 1 08:56:29.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kkhfd'
Jul 1 08:56:29.900: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Jul 1 08:56:29.900: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 1 08:56:29.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-kkhfd' Jul 1 08:56:30.003: INFO: stderr: "No resources found.\n" Jul 1 08:56:30.003: INFO: stdout: "" Jul 1 08:56:30.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-kkhfd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 08:56:30.116: INFO: stderr: "" Jul 1 08:56:30.116: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:56:30.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kkhfd" for this suite. 
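An aside on the go-template used repeatedly above: `{{if (exists . "status" "containerStatuses")}}…{{end}}` prints `true` only when a container named `update-demo` reports a `running` state, which is why the first poll returns an empty stdout and the retry five seconds later returns `"true"`. A minimal Python sketch of the same check, operating on a pod object as decoded from `kubectl get pod -o json` (the sample `pending`/`running` dicts are hypothetical, not taken from this run):

```python
def container_running(pod: dict, name: str) -> bool:
    """Mirror the e2e go-template: true iff the named container has a
    containerStatuses entry whose state map includes a 'running' key."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

# While the image is still being pulled, the entry carries a 'waiting' state:
pending = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"waiting": {"reason": "ContainerCreating"}}}]}}
# Once started, the same entry carries a 'running' state instead:
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-07-01T08:56:25Z"}}}]}}
```

The template's `exists` guard corresponds to the `.get(..., [])`/`.get(..., {})` defaults here: a pod with no `containerStatuses` yet simply yields no output rather than an error.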
Jul 1 08:56:52.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:56:52.209: INFO: namespace: e2e-tests-kubectl-kkhfd, resource: bindings, ignored listing per whitelist Jul 1 08:56:52.260: INFO: namespace e2e-tests-kubectl-kkhfd deletion completed in 22.139957042s • [SLOW TEST:28.747 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:56:52.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-ce83e23f-bb78-11ea-a133-0242ac110018 STEP: Creating a pod to test consume configMaps Jul 1 08:56:52.391: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce87b1ad-bb78-11ea-a133-0242ac110018" in namespace "e2e-tests-configmap-hmjkz" to be "success or failure" Jul 1 08:56:52.408: INFO: Pod 
"pod-configmaps-ce87b1ad-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.11183ms Jul 1 08:56:54.412: INFO: Pod "pod-configmaps-ce87b1ad-bb78-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0208489s Jul 1 08:56:56.440: INFO: Pod "pod-configmaps-ce87b1ad-bb78-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049573169s STEP: Saw pod success Jul 1 08:56:56.440: INFO: Pod "pod-configmaps-ce87b1ad-bb78-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:56:56.443: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-ce87b1ad-bb78-11ea-a133-0242ac110018 container configmap-volume-test: STEP: delete the pod Jul 1 08:56:56.502: INFO: Waiting for pod pod-configmaps-ce87b1ad-bb78-11ea-a133-0242ac110018 to disappear Jul 1 08:56:56.567: INFO: Pod pod-configmaps-ce87b1ad-bb78-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:56:56.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hmjkz" for this suite. 
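The `Waiting up to 5m0s for pod … to be "success or failure"` lines above come from a poll loop that re-reads the pod phase until it reaches a terminal value (`Succeeded` or `Failed`) or the timeout lapses, logging the elapsed time at each attempt. A rough Python sketch of that loop; the timeout matches the log, but the interval and the injected phase sequence are illustrative stand-ins, not the framework's exact internals:

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                sleep=time.sleep):
    """Poll get_phase() until the pod reports a terminal phase.

    Returns 'Succeeded' or 'Failed'; raises TimeoutError if the pod
    never leaves Pending/Running within the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated poll sequence matching the log: Pending, Pending, then Succeeded
# on the third read (sleep is stubbed out so the sketch runs instantly).
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_success_or_failure(lambda: next(phases), sleep=lambda _: None)
```

Note the test then treats `Succeeded` as "pod success" and fetches the container logs before deleting the pod, exactly as the `Saw pod success` / `Trying to get logs` / `delete the pod` steps show.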
Jul 1 08:57:04.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:57:04.712: INFO: namespace: e2e-tests-configmap-hmjkz, resource: bindings, ignored listing per whitelist Jul 1 08:57:04.719: INFO: namespace e2e-tests-configmap-hmjkz deletion completed in 8.147883349s • [SLOW TEST:12.459 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:57:04.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-9l2sw STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-9l2sw STEP: Deleting pre-stop pod Jul 1 08:57:19.454: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:57:19.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-9l2sw" for this suite. Jul 1 08:57:57.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:57:57.542: INFO: namespace: e2e-tests-prestop-9l2sw, resource: bindings, ignored listing per whitelist Jul 1 08:57:57.594: INFO: namespace e2e-tests-prestop-9l2sw deletion completed in 38.109091065s • [SLOW TEST:52.874 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:57:57.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app 
containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 1 08:57:57.714: INFO: PodSpec: initContainers in spec.initContainers Jul 1 08:58:46.360: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f57aa84c-bb78-11ea-a133-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-7jbm9", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-7jbm9/pods/pod-init-f57aa84c-bb78-11ea-a133-0242ac110018", UID:"f57d730d-bb78-11ea-99e8-0242ac110002", ResourceVersion:"18835913", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729190677, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"714755637"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-j6ztn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001c8d340), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j6ztn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j6ztn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j6ztn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0017c00f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00199dd40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0017c0180)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0017c01a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0017c01a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0017c01ac)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729190677, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729190677, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729190677, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729190677, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.41", StartTime:(*v1.Time)(0xc0017ad600), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00150a9a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00150aa10)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://6fc26687cc0bfdbf932c37c81c26da28a15c3017f4d19c6fe19dd2c7ea5ba321"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0017ad660), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0017ad640), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, 
QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:58:46.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-7jbm9" for this suite. Jul 1 08:59:08.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:59:08.468: INFO: namespace: e2e-tests-init-container-7jbm9, resource: bindings, ignored listing per whitelist Jul 1 08:59:08.527: INFO: namespace e2e-tests-init-container-7jbm9 deletion completed in 22.095033855s • [SLOW TEST:70.933 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:59:08.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-1fbdefe3-bb79-11ea-a133-0242ac110018 STEP: Creating a pod to test consume 
configMaps Jul 1 08:59:08.640: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1fbfe42f-bb79-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-2nt57" to be "success or failure" Jul 1 08:59:08.663: INFO: Pod "pod-projected-configmaps-1fbfe42f-bb79-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.806689ms Jul 1 08:59:10.719: INFO: Pod "pod-projected-configmaps-1fbfe42f-bb79-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079130026s Jul 1 08:59:12.723: INFO: Pod "pod-projected-configmaps-1fbfe42f-bb79-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083119887s STEP: Saw pod success Jul 1 08:59:12.723: INFO: Pod "pod-projected-configmaps-1fbfe42f-bb79-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:59:12.726: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-1fbfe42f-bb79-11ea-a133-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jul 1 08:59:12.894: INFO: Waiting for pod pod-projected-configmaps-1fbfe42f-bb79-11ea-a133-0242ac110018 to disappear Jul 1 08:59:12.908: INFO: Pod pod-projected-configmaps-1fbfe42f-bb79-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:59:12.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2nt57" for this suite. 
Jul 1 08:59:18.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:59:18.933: INFO: namespace: e2e-tests-projected-2nt57, resource: bindings, ignored listing per whitelist Jul 1 08:59:18.998: INFO: namespace e2e-tests-projected-2nt57 deletion completed in 6.085928894s • [SLOW TEST:10.471 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:59:18.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 1 08:59:19.170: INFO: Waiting up to 5m0s for pod "downward-api-25fdff57-bb79-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-6ksss" to be "success or failure" Jul 1 08:59:19.193: INFO: Pod "downward-api-25fdff57-bb79-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.440705ms Jul 1 08:59:21.217: INFO: Pod "downward-api-25fdff57-bb79-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046324061s Jul 1 08:59:23.220: INFO: Pod "downward-api-25fdff57-bb79-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049884553s STEP: Saw pod success Jul 1 08:59:23.220: INFO: Pod "downward-api-25fdff57-bb79-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 08:59:23.223: INFO: Trying to get logs from node hunter-worker pod downward-api-25fdff57-bb79-11ea-a133-0242ac110018 container dapi-container: STEP: delete the pod Jul 1 08:59:23.295: INFO: Waiting for pod downward-api-25fdff57-bb79-11ea-a133-0242ac110018 to disappear Jul 1 08:59:23.324: INFO: Pod downward-api-25fdff57-bb79-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 08:59:23.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6ksss" for this suite. 
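The Downward API test above exercises `resourceFieldRef`, which injects a container's own CPU/memory limits and requests into its environment. A hedged manifest sketch of that mechanism; the pod name, image, values, and variable names here are illustrative, not the spec the test actually generated:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-resources-demo   # illustrative name, not the test pod
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox:1.29
      command: ["sh", "-c", "env"]
      resources:
        requests: {cpu: 250m, memory: 32Mi}
        limits: {cpu: 500m, memory: 64Mi}
      env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: dapi-container
              resource: limits.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: dapi-container
              resource: requests.memory
```

The test's `dapi-container` prints its environment and the framework asserts the expected resource values appear, which is why success is determined by reading the container logs after the pod reaches `Succeeded`.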
Jul 1 08:59:29.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 08:59:29.346: INFO: namespace: e2e-tests-downward-api-6ksss, resource: bindings, ignored listing per whitelist Jul 1 08:59:29.412: INFO: namespace e2e-tests-downward-api-6ksss deletion completed in 6.084810918s • [SLOW TEST:10.413 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 08:59:29.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 08:59:29.523: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jul 1 08:59:29.529: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:29.531: INFO: Number of nodes with available pods: 0 Jul 1 08:59:29.531: INFO: Node hunter-worker is running more than one daemon pod Jul 1 08:59:30.536: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:30.540: INFO: Number of nodes with available pods: 0 Jul 1 08:59:30.540: INFO: Node hunter-worker is running more than one daemon pod Jul 1 08:59:31.976: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:32.011: INFO: Number of nodes with available pods: 0 Jul 1 08:59:32.011: INFO: Node hunter-worker is running more than one daemon pod Jul 1 08:59:32.698: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:32.771: INFO: Number of nodes with available pods: 0 Jul 1 08:59:32.771: INFO: Node hunter-worker is running more than one daemon pod Jul 1 08:59:33.791: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:33.795: INFO: Number of nodes with available pods: 0 Jul 1 08:59:33.795: INFO: Node hunter-worker is running more than one daemon pod Jul 1 08:59:34.770: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:35.132: INFO: Number of nodes with available pods: 0 Jul 1 08:59:35.132: INFO: Node 
hunter-worker is running more than one daemon pod Jul 1 08:59:35.810: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:36.062: INFO: Number of nodes with available pods: 0 Jul 1 08:59:36.062: INFO: Node hunter-worker is running more than one daemon pod Jul 1 08:59:36.607: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:36.610: INFO: Number of nodes with available pods: 0 Jul 1 08:59:36.610: INFO: Node hunter-worker is running more than one daemon pod Jul 1 08:59:37.854: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:38.230: INFO: Number of nodes with available pods: 1 Jul 1 08:59:38.230: INFO: Node hunter-worker is running more than one daemon pod Jul 1 08:59:38.611: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:38.672: INFO: Number of nodes with available pods: 2 Jul 1 08:59:38.673: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 1 08:59:39.833: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:39.834: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 1 08:59:39.838: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:41.259: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:41.259: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:41.262: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:41.892: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:41.892: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:42.272: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:43.151: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:43.151: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:43.155: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:43.841: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:43.842: INFO: Wrong image for pod: daemon-set-xz8q8. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:43.844: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:44.857: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:44.857: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:44.857: INFO: Pod daemon-set-xz8q8 is not available Jul 1 08:59:44.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:45.941: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:45.941: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:45.941: INFO: Pod daemon-set-xz8q8 is not available Jul 1 08:59:45.944: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:46.842: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:46.842: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 1 08:59:46.842: INFO: Pod daemon-set-xz8q8 is not available Jul 1 08:59:46.846: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:47.843: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:47.843: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:47.843: INFO: Pod daemon-set-xz8q8 is not available Jul 1 08:59:47.846: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:48.841: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:48.841: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:48.841: INFO: Pod daemon-set-xz8q8 is not available Jul 1 08:59:48.844: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:49.841: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:49.841: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 1 08:59:49.841: INFO: Pod daemon-set-xz8q8 is not available Jul 1 08:59:49.844: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:51.127: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:51.127: INFO: Wrong image for pod: daemon-set-xz8q8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:51.127: INFO: Pod daemon-set-xz8q8 is not available Jul 1 08:59:51.130: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:51.843: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:51.843: INFO: Pod daemon-set-v7cl5 is not available Jul 1 08:59:51.846: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:53.954: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:53.954: INFO: Pod daemon-set-v7cl5 is not available Jul 1 08:59:56.224: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:57.201: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 1 08:59:57.201: INFO: Pod daemon-set-v7cl5 is not available Jul 1 08:59:57.205: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:57.841: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:57.842: INFO: Pod daemon-set-v7cl5 is not available Jul 1 08:59:57.845: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:58.843: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:58.843: INFO: Pod daemon-set-v7cl5 is not available Jul 1 08:59:58.847: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 08:59:59.842: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 08:59:59.845: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:00.894: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 09:00:00.898: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:01.842: INFO: Wrong image for pod: daemon-set-pb2xb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 1 09:00:01.842: INFO: Pod daemon-set-pb2xb is not available Jul 1 09:00:01.846: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:02.857: INFO: Pod daemon-set-46hwc is not available Jul 1 09:00:02.861: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jul 1 09:00:02.864: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:02.868: INFO: Number of nodes with available pods: 1 Jul 1 09:00:02.868: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:03.872: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:03.875: INFO: Number of nodes with available pods: 1 Jul 1 09:00:03.875: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:04.882: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:04.883: INFO: Number of nodes with available pods: 1 Jul 1 09:00:04.883: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:05.872: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:05.876: INFO: Number of nodes with available pods: 1 Jul 1 09:00:05.876: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:06.960: INFO: DaemonSet pods 
can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:06.964: INFO: Number of nodes with available pods: 2 Jul 1 09:00:06.964: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-l26pb, will wait for the garbage collector to delete the pods Jul 1 09:00:07.269: INFO: Deleting DaemonSet.extensions daemon-set took: 6.693901ms Jul 1 09:00:07.369: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.19339ms Jul 1 09:00:21.775: INFO: Number of nodes with available pods: 0 Jul 1 09:00:21.775: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 09:00:21.778: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-l26pb/daemonsets","resourceVersion":"18836227"},"items":null} Jul 1 09:00:21.780: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l26pb/pods","resourceVersion":"18836227"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:00:21.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-l26pb" for this suite. 
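The repeated "DaemonSet pods can't tolerate node hunter-control-plane with taints" lines come from the framework's node filter: a node only counts toward the expected DaemonSet pod total if every NoSchedule taint on it is matched by some toleration on the pod, and the test's pods carry no toleration for the master taint. A minimal sketch of that matching rule (the `Taint`/`Toleration` types and function names here are illustrative, not the actual Kubernetes source):

```python
# Sketch of taint-vs-toleration matching as applied for NoSchedule taints.
# Types and helpers are illustrative stand-ins, not kubelet/scheduler code.
from dataclasses import dataclass

@dataclass
class Taint:
    key: str
    value: str
    effect: str  # e.g. "NoSchedule"

@dataclass
class Toleration:
    key: str       # empty key with operator "Exists" matches every taint
    operator: str  # "Exists" or "Equal"
    value: str
    effect: str    # empty effect matches all effects

def tolerates(tol: Toleration, taint: Taint) -> bool:
    if tol.effect and tol.effect != taint.effect:
        return False
    if tol.operator == "Exists":
        return tol.key == "" or tol.key == taint.key
    return tol.key == taint.key and tol.value == taint.value

def schedulable(tolerations, taints) -> bool:
    # Every NoSchedule taint must be tolerated by at least one toleration.
    return all(any(tolerates(t, taint) for t in tolerations)
               for taint in taints if taint.effect == "NoSchedule")

master = Taint("node-role.kubernetes.io/master", "", "NoSchedule")
print(schedulable([], [master]))                                  # False: node is skipped
print(schedulable([Toleration("", "Exists", "", "")], [master]))  # True: tolerate-all
```

With no tolerations the control-plane node is excluded, which is why the expected node count in this run is 2 (the two workers) rather than 3.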
Jul 1 09:00:27.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:00:27.903: INFO: namespace: e2e-tests-daemonsets-l26pb, resource: bindings, ignored listing per whitelist Jul 1 09:00:27.948: INFO: namespace e2e-tests-daemonsets-l26pb deletion completed in 6.154320492s • [SLOW TEST:58.535 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:00:27.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
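The long runs of "Number of nodes with available pods" lines above are a fixed-interval poll: the framework re-checks DaemonSet status roughly once a second until the available-pod count matches the schedulable node count or a timeout expires. The pattern, reduced to a sketch (assumed names; the framework's actual wait helpers differ):

```python
import time

def wait_for(condition, timeout_s=300.0, interval_s=1.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns True or `timeout_s` elapses.

    Returns True on success, False on timeout -- the shape of the e2e
    framework's wait loops (names here are illustrative).
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if condition():
            return True
        sleep(interval_s)
    return False

# Example: a status source that reports enough available pods on the third check.
polls = iter([0, 0, 2])  # available pods seen on successive checks
ready = wait_for(lambda: next(polls, 2) >= 2, timeout_s=10, interval_s=0.01)
print(ready)  # True
```

Each iteration of the real loop re-lists the DaemonSet's pods, which is why the log emits one "skip checking this node" plus one availability line per second until convergence.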
Jul 1 09:00:28.084: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:28.121: INFO: Number of nodes with available pods: 0 Jul 1 09:00:28.121: INFO: Node hunter-worker is running more than one daemon pod Jul 1 09:00:29.127: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:29.130: INFO: Number of nodes with available pods: 0 Jul 1 09:00:29.130: INFO: Node hunter-worker is running more than one daemon pod Jul 1 09:00:30.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:30.130: INFO: Number of nodes with available pods: 0 Jul 1 09:00:30.130: INFO: Node hunter-worker is running more than one daemon pod Jul 1 09:00:31.482: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:31.486: INFO: Number of nodes with available pods: 0 Jul 1 09:00:31.486: INFO: Node hunter-worker is running more than one daemon pod Jul 1 09:00:32.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:32.128: INFO: Number of nodes with available pods: 0 Jul 1 09:00:32.128: INFO: Node hunter-worker is running more than one daemon pod Jul 1 09:00:33.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:33.129: INFO: Number of nodes with available pods: 1 Jul 1 09:00:33.129: INFO: Node 
hunter-worker2 is running more than one daemon pod Jul 1 09:00:34.428: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:34.431: INFO: Number of nodes with available pods: 2 Jul 1 09:00:34.431: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jul 1 09:00:34.468: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:34.471: INFO: Number of nodes with available pods: 1 Jul 1 09:00:34.471: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:35.475: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:35.478: INFO: Number of nodes with available pods: 1 Jul 1 09:00:35.478: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:36.553: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:36.606: INFO: Number of nodes with available pods: 1 Jul 1 09:00:36.606: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:37.475: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:37.478: INFO: Number of nodes with available pods: 1 Jul 1 09:00:37.478: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:38.475: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node Jul 1 09:00:38.478: INFO: Number of nodes with available pods: 1 Jul 1 09:00:38.478: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:39.475: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:39.478: INFO: Number of nodes with available pods: 1 Jul 1 09:00:39.478: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:40.475: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:40.478: INFO: Number of nodes with available pods: 1 Jul 1 09:00:40.478: INFO: Node hunter-worker2 is running more than one daemon pod Jul 1 09:00:41.475: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 09:00:41.478: INFO: Number of nodes with available pods: 2 Jul 1 09:00:41.478: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kxb44, will wait for the garbage collector to delete the pods Jul 1 09:00:41.540: INFO: Deleting DaemonSet.extensions daemon-set took: 6.664286ms Jul 1 09:00:41.640: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.229279ms Jul 1 09:00:46.543: INFO: Number of nodes with available pods: 0 Jul 1 09:00:46.543: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 09:00:46.545: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kxb44/daemonsets","resourceVersion":"18836360"},"items":null} Jul 1 09:00:46.547: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kxb44/pods","resourceVersion":"18836360"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:00:46.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-kxb44" for this suite. Jul 1 09:00:52.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:00:52.725: INFO: namespace: e2e-tests-daemonsets-kxb44, resource: bindings, ignored listing per whitelist Jul 1 09:00:52.728: INFO: namespace e2e-tests-daemonsets-kxb44 deletion completed in 6.169004019s • [SLOW TEST:24.780 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:00:52.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:01:52.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-59dsv" for this suite. Jul 1 09:02:14.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:02:15.011: INFO: namespace: e2e-tests-container-probe-59dsv, resource: bindings, ignored listing per whitelist Jul 1 09:02:15.016: INFO: namespace e2e-tests-container-probe-59dsv deletion completed in 22.080999252s • [SLOW TEST:82.288 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:02:15.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace 
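The container-probe test that just completed above encodes a key distinction: a failing readiness probe only marks the container not-Ready (removing the pod from Service endpoints), while only a failing liveness probe causes the kubelet to restart the container. Hence the pod sits for the full observation window with Ready=false and restartCount=0. A toy state model of that rule (hypothetical types, not kubelet code):

```python
# Toy model of probe handling: readiness gates Ready, liveness failures
# restart the container. Types and names are illustrative only.
from dataclasses import dataclass

@dataclass
class ContainerState:
    ready: bool = False
    restart_count: int = 0

def apply_probe_results(state, readiness_ok, liveness_ok):
    state.ready = readiness_ok
    if not liveness_ok:
        state.restart_count += 1  # kubelet kills and recreates the container
        state.ready = False       # a freshly restarted container starts unready
    return state

s = ContainerState()
for _ in range(10):  # readiness keeps failing, liveness keeps passing
    apply_probe_results(s, readiness_ok=False, liveness_ok=True)
print(s.ready, s.restart_count)  # False 0 -- never ready, never restarted
```

The same distinction explains the StatefulSet test that follows: moving index.html out of the nginx web root makes only the readiness probe fail, so the pod stays Running-but-unready without being restarted.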
[BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-4xnsk [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-4xnsk STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-4xnsk Jul 1 09:02:15.410: INFO: Found 0 stateful pods, waiting for 1 Jul 1 09:02:25.419: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 1 09:02:25.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 09:02:25.894: INFO: stderr: "I0701 09:02:25.598980 1939 log.go:172] (0xc00015e840) (0xc0005ed400) Create stream\nI0701 09:02:25.599037 1939 log.go:172] (0xc00015e840) (0xc0005ed400) Stream added, broadcasting: 1\nI0701 09:02:25.601269 1939 log.go:172] (0xc00015e840) Reply frame received for 1\nI0701 09:02:25.601321 1939 log.go:172] (0xc00015e840) (0xc000554000) Create stream\nI0701 09:02:25.601331 1939 log.go:172] (0xc00015e840) (0xc000554000) Stream added, broadcasting: 3\nI0701 09:02:25.602224 1939 log.go:172] (0xc00015e840) Reply frame received for 3\nI0701 09:02:25.602267 1939 log.go:172] (0xc00015e840) (0xc000668000) Create stream\nI0701 09:02:25.602291 1939 log.go:172] (0xc00015e840) (0xc000668000) Stream added, broadcasting: 5\nI0701 09:02:25.603242 1939 log.go:172] (0xc00015e840) 
Reply frame received for 5\nI0701 09:02:25.885495 1939 log.go:172] (0xc00015e840) Data frame received for 5\nI0701 09:02:25.885531 1939 log.go:172] (0xc000668000) (5) Data frame handling\nI0701 09:02:25.885554 1939 log.go:172] (0xc00015e840) Data frame received for 3\nI0701 09:02:25.885562 1939 log.go:172] (0xc000554000) (3) Data frame handling\nI0701 09:02:25.885570 1939 log.go:172] (0xc000554000) (3) Data frame sent\nI0701 09:02:25.885577 1939 log.go:172] (0xc00015e840) Data frame received for 3\nI0701 09:02:25.885582 1939 log.go:172] (0xc000554000) (3) Data frame handling\nI0701 09:02:25.887537 1939 log.go:172] (0xc00015e840) Data frame received for 1\nI0701 09:02:25.887582 1939 log.go:172] (0xc0005ed400) (1) Data frame handling\nI0701 09:02:25.887610 1939 log.go:172] (0xc0005ed400) (1) Data frame sent\nI0701 09:02:25.887701 1939 log.go:172] (0xc00015e840) (0xc0005ed400) Stream removed, broadcasting: 1\nI0701 09:02:25.887739 1939 log.go:172] (0xc00015e840) Go away received\nI0701 09:02:25.887925 1939 log.go:172] (0xc00015e840) (0xc0005ed400) Stream removed, broadcasting: 1\nI0701 09:02:25.887952 1939 log.go:172] (0xc00015e840) (0xc000554000) Stream removed, broadcasting: 3\nI0701 09:02:25.887965 1939 log.go:172] (0xc00015e840) (0xc000668000) Stream removed, broadcasting: 5\n" Jul 1 09:02:25.895: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 09:02:25.895: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 09:02:25.898: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 1 09:02:35.903: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 09:02:35.903: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 09:02:35.920: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:02:35.920: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 
+0000 UTC 2020-07-01 09:02:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:15 +0000 UTC }] Jul 1 09:02:35.920: INFO: Jul 1 09:02:35.920: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 1 09:02:36.924: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993272281s Jul 1 09:02:37.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988614265s Jul 1 09:02:38.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962644992s Jul 1 09:02:40.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.920305961s Jul 1 09:02:41.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.904950165s Jul 1 09:02:42.147: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.891448223s Jul 1 09:02:43.152: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.765497325s Jul 1 09:02:44.293: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.760565864s Jul 1 09:02:45.310: INFO: Verifying statefulset ss doesn't scale past 3 for another 619.92625ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-4xnsk Jul 1 09:02:46.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 09:02:46.532: INFO: stderr: "I0701 09:02:46.446383 1962 log.go:172] (0xc0006e24d0) (0xc000744640) Create stream\nI0701 09:02:46.446443 1962 log.go:172] (0xc0006e24d0) (0xc000744640) Stream added, broadcasting: 1\nI0701 09:02:46.449976 1962 log.go:172] (0xc0006e24d0) Reply frame 
received for 1\nI0701 09:02:46.450044 1962 log.go:172] (0xc0006e24d0) (0xc0007446e0) Create stream\nI0701 09:02:46.450063 1962 log.go:172] (0xc0006e24d0) (0xc0007446e0) Stream added, broadcasting: 3\nI0701 09:02:46.451016 1962 log.go:172] (0xc0006e24d0) Reply frame received for 3\nI0701 09:02:46.451055 1962 log.go:172] (0xc0006e24d0) (0xc0007e6be0) Create stream\nI0701 09:02:46.451067 1962 log.go:172] (0xc0006e24d0) (0xc0007e6be0) Stream added, broadcasting: 5\nI0701 09:02:46.451912 1962 log.go:172] (0xc0006e24d0) Reply frame received for 5\nI0701 09:02:46.521076 1962 log.go:172] (0xc0006e24d0) Data frame received for 3\nI0701 09:02:46.521104 1962 log.go:172] (0xc0007446e0) (3) Data frame handling\nI0701 09:02:46.521247 1962 log.go:172] (0xc0007446e0) (3) Data frame sent\nI0701 09:02:46.521259 1962 log.go:172] (0xc0006e24d0) Data frame received for 3\nI0701 09:02:46.521265 1962 log.go:172] (0xc0007446e0) (3) Data frame handling\nI0701 09:02:46.521328 1962 log.go:172] (0xc0006e24d0) Data frame received for 5\nI0701 09:02:46.521359 1962 log.go:172] (0xc0007e6be0) (5) Data frame handling\nI0701 09:02:46.524393 1962 log.go:172] (0xc0006e24d0) Data frame received for 1\nI0701 09:02:46.524421 1962 log.go:172] (0xc000744640) (1) Data frame handling\nI0701 09:02:46.524436 1962 log.go:172] (0xc000744640) (1) Data frame sent\nI0701 09:02:46.524460 1962 log.go:172] (0xc0006e24d0) (0xc000744640) Stream removed, broadcasting: 1\nI0701 09:02:46.524485 1962 log.go:172] (0xc0006e24d0) Go away received\nI0701 09:02:46.524914 1962 log.go:172] (0xc0006e24d0) (0xc000744640) Stream removed, broadcasting: 1\nI0701 09:02:46.524951 1962 log.go:172] (0xc0006e24d0) (0xc0007446e0) Stream removed, broadcasting: 3\nI0701 09:02:46.524965 1962 log.go:172] (0xc0006e24d0) (0xc0007e6be0) Stream removed, broadcasting: 5\n" Jul 1 09:02:46.532: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 09:02:46.532: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || 
true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 09:02:46.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 09:02:46.804: INFO: stderr: "I0701 09:02:46.724046 1984 log.go:172] (0xc0001628f0) (0xc000742640) Create stream\nI0701 09:02:46.724097 1984 log.go:172] (0xc0001628f0) (0xc000742640) Stream added, broadcasting: 1\nI0701 09:02:46.726388 1984 log.go:172] (0xc0001628f0) Reply frame received for 1\nI0701 09:02:46.726441 1984 log.go:172] (0xc0001628f0) (0xc000670dc0) Create stream\nI0701 09:02:46.726455 1984 log.go:172] (0xc0001628f0) (0xc000670dc0) Stream added, broadcasting: 3\nI0701 09:02:46.727234 1984 log.go:172] (0xc0001628f0) Reply frame received for 3\nI0701 09:02:46.727270 1984 log.go:172] (0xc0001628f0) (0xc0006fe000) Create stream\nI0701 09:02:46.727287 1984 log.go:172] (0xc0001628f0) (0xc0006fe000) Stream added, broadcasting: 5\nI0701 09:02:46.728130 1984 log.go:172] (0xc0001628f0) Reply frame received for 5\nI0701 09:02:46.794834 1984 log.go:172] (0xc0001628f0) Data frame received for 5\nI0701 09:02:46.794870 1984 log.go:172] (0xc0006fe000) (5) Data frame handling\nI0701 09:02:46.794879 1984 log.go:172] (0xc0006fe000) (5) Data frame sent\nI0701 09:02:46.794884 1984 log.go:172] (0xc0001628f0) Data frame received for 5\nI0701 09:02:46.794888 1984 log.go:172] (0xc0006fe000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0701 09:02:46.794925 1984 log.go:172] (0xc0001628f0) Data frame received for 3\nI0701 09:02:46.794968 1984 log.go:172] (0xc000670dc0) (3) Data frame handling\nI0701 09:02:46.795004 1984 log.go:172] (0xc000670dc0) (3) Data frame sent\nI0701 09:02:46.795049 1984 log.go:172] (0xc0001628f0) Data frame received for 3\nI0701 09:02:46.795071 1984 log.go:172] (0xc000670dc0) (3) Data frame handling\nI0701 09:02:46.796715 1984 
log.go:172] (0xc0001628f0) Data frame received for 1\nI0701 09:02:46.796735 1984 log.go:172] (0xc000742640) (1) Data frame handling\nI0701 09:02:46.796748 1984 log.go:172] (0xc000742640) (1) Data frame sent\nI0701 09:02:46.796759 1984 log.go:172] (0xc0001628f0) (0xc000742640) Stream removed, broadcasting: 1\nI0701 09:02:46.796768 1984 log.go:172] (0xc0001628f0) Go away received\nI0701 09:02:46.797362 1984 log.go:172] (0xc0001628f0) (0xc000742640) Stream removed, broadcasting: 1\nI0701 09:02:46.797400 1984 log.go:172] (0xc0001628f0) (0xc000670dc0) Stream removed, broadcasting: 3\nI0701 09:02:46.797418 1984 log.go:172] (0xc0001628f0) (0xc0006fe000) Stream removed, broadcasting: 5\n" Jul 1 09:02:46.804: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 09:02:46.804: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 09:02:46.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 09:02:46.999: INFO: stderr: "I0701 09:02:46.916418 2007 log.go:172] (0xc00071c370) (0xc0006a5400) Create stream\nI0701 09:02:46.916511 2007 log.go:172] (0xc00071c370) (0xc0006a5400) Stream added, broadcasting: 1\nI0701 09:02:46.918787 2007 log.go:172] (0xc00071c370) Reply frame received for 1\nI0701 09:02:46.918840 2007 log.go:172] (0xc00071c370) (0xc0006a54a0) Create stream\nI0701 09:02:46.918855 2007 log.go:172] (0xc00071c370) (0xc0006a54a0) Stream added, broadcasting: 3\nI0701 09:02:46.919856 2007 log.go:172] (0xc00071c370) Reply frame received for 3\nI0701 09:02:46.919904 2007 log.go:172] (0xc00071c370) (0xc000416000) Create stream\nI0701 09:02:46.919924 2007 log.go:172] (0xc00071c370) (0xc000416000) Stream added, broadcasting: 5\nI0701 09:02:46.920922 2007 log.go:172] (0xc00071c370) Reply frame received for 5\nI0701 
09:02:46.991582 2007 log.go:172] (0xc00071c370) Data frame received for 3\nI0701 09:02:46.991605 2007 log.go:172] (0xc0006a54a0) (3) Data frame handling\nI0701 09:02:46.991614 2007 log.go:172] (0xc0006a54a0) (3) Data frame sent\nI0701 09:02:46.991620 2007 log.go:172] (0xc00071c370) Data frame received for 3\nI0701 09:02:46.991624 2007 log.go:172] (0xc0006a54a0) (3) Data frame handling\nI0701 09:02:46.991801 2007 log.go:172] (0xc00071c370) Data frame received for 5\nI0701 09:02:46.991822 2007 log.go:172] (0xc000416000) (5) Data frame handling\nI0701 09:02:46.991841 2007 log.go:172] (0xc000416000) (5) Data frame sent\nI0701 09:02:46.991854 2007 log.go:172] (0xc00071c370) Data frame received for 5\nI0701 09:02:46.991863 2007 log.go:172] (0xc000416000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0701 09:02:46.993322 2007 log.go:172] (0xc00071c370) Data frame received for 1\nI0701 09:02:46.993349 2007 log.go:172] (0xc0006a5400) (1) Data frame handling\nI0701 09:02:46.993361 2007 log.go:172] (0xc0006a5400) (1) Data frame sent\nI0701 09:02:46.993380 2007 log.go:172] (0xc00071c370) (0xc0006a5400) Stream removed, broadcasting: 1\nI0701 09:02:46.993397 2007 log.go:172] (0xc00071c370) Go away received\nI0701 09:02:46.993621 2007 log.go:172] (0xc00071c370) (0xc0006a5400) Stream removed, broadcasting: 1\nI0701 09:02:46.993645 2007 log.go:172] (0xc00071c370) (0xc0006a54a0) Stream removed, broadcasting: 3\nI0701 09:02:46.993655 2007 log.go:172] (0xc00071c370) (0xc000416000) Stream removed, broadcasting: 5\n" Jul 1 09:02:46.999: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 09:02:46.999: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 09:02:47.068: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 09:02:47.068: INFO: Waiting for pod ss-1 to enter Running - Ready=true, 
currently Running - Ready=true Jul 1 09:02:47.068: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 1 09:02:47.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 09:02:47.657: INFO: stderr: "I0701 09:02:47.597617 2030 log.go:172] (0xc0007d0160) (0xc000738640) Create stream\nI0701 09:02:47.597664 2030 log.go:172] (0xc0007d0160) (0xc000738640) Stream added, broadcasting: 1\nI0701 09:02:47.599762 2030 log.go:172] (0xc0007d0160) Reply frame received for 1\nI0701 09:02:47.599816 2030 log.go:172] (0xc0007d0160) (0xc0004fac80) Create stream\nI0701 09:02:47.599831 2030 log.go:172] (0xc0007d0160) (0xc0004fac80) Stream added, broadcasting: 3\nI0701 09:02:47.600671 2030 log.go:172] (0xc0007d0160) Reply frame received for 3\nI0701 09:02:47.600697 2030 log.go:172] (0xc0007d0160) (0xc0007386e0) Create stream\nI0701 09:02:47.600705 2030 log.go:172] (0xc0007d0160) (0xc0007386e0) Stream added, broadcasting: 5\nI0701 09:02:47.601736 2030 log.go:172] (0xc0007d0160) Reply frame received for 5\nI0701 09:02:47.647172 2030 log.go:172] (0xc0007d0160) Data frame received for 5\nI0701 09:02:47.647203 2030 log.go:172] (0xc0007386e0) (5) Data frame handling\nI0701 09:02:47.647260 2030 log.go:172] (0xc0007d0160) Data frame received for 3\nI0701 09:02:47.647297 2030 log.go:172] (0xc0004fac80) (3) Data frame handling\nI0701 09:02:47.647317 2030 log.go:172] (0xc0004fac80) (3) Data frame sent\nI0701 09:02:47.647336 2030 log.go:172] (0xc0007d0160) Data frame received for 3\nI0701 09:02:47.647353 2030 log.go:172] (0xc0004fac80) (3) Data frame handling\nI0701 09:02:47.648693 2030 log.go:172] (0xc0007d0160) Data frame received for 1\nI0701 09:02:47.648734 2030 log.go:172] (0xc000738640) (1) Data frame handling\nI0701 09:02:47.648765 2030 log.go:172] 
(0xc000738640) (1) Data frame sent\nI0701 09:02:47.648794 2030 log.go:172] (0xc0007d0160) (0xc000738640) Stream removed, broadcasting: 1\nI0701 09:02:47.649003 2030 log.go:172] (0xc0007d0160) (0xc000738640) Stream removed, broadcasting: 1\nI0701 09:02:47.649043 2030 log.go:172] (0xc0007d0160) (0xc0004fac80) Stream removed, broadcasting: 3\nI0701 09:02:47.649068 2030 log.go:172] (0xc0007d0160) (0xc0007386e0) Stream removed, broadcasting: 5\n" Jul 1 09:02:47.657: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 09:02:47.657: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 09:02:47.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 09:02:48.361: INFO: stderr: "I0701 09:02:47.933328 2052 log.go:172] (0xc00080e2c0) (0xc000605400) Create stream\nI0701 09:02:47.933405 2052 log.go:172] (0xc00080e2c0) (0xc000605400) Stream added, broadcasting: 1\nI0701 09:02:47.936100 2052 log.go:172] (0xc00080e2c0) Reply frame received for 1\nI0701 09:02:47.936139 2052 log.go:172] (0xc00080e2c0) (0xc0006054a0) Create stream\nI0701 09:02:47.936155 2052 log.go:172] (0xc00080e2c0) (0xc0006054a0) Stream added, broadcasting: 3\nI0701 09:02:47.937422 2052 log.go:172] (0xc00080e2c0) Reply frame received for 3\nI0701 09:02:47.937482 2052 log.go:172] (0xc00080e2c0) (0xc000605540) Create stream\nI0701 09:02:47.937496 2052 log.go:172] (0xc00080e2c0) (0xc000605540) Stream added, broadcasting: 5\nI0701 09:02:47.938563 2052 log.go:172] (0xc00080e2c0) Reply frame received for 5\nI0701 09:02:48.352775 2052 log.go:172] (0xc00080e2c0) Data frame received for 3\nI0701 09:02:48.352818 2052 log.go:172] (0xc0006054a0) (3) Data frame handling\nI0701 09:02:48.352942 2052 log.go:172] (0xc00080e2c0) Data frame received for 5\nI0701 09:02:48.352970 
2052 log.go:172] (0xc000605540) (5) Data frame handling\nI0701 09:02:48.353005 2052 log.go:172] (0xc0006054a0) (3) Data frame sent\nI0701 09:02:48.353028 2052 log.go:172] (0xc00080e2c0) Data frame received for 3\nI0701 09:02:48.353041 2052 log.go:172] (0xc0006054a0) (3) Data frame handling\nI0701 09:02:48.355077 2052 log.go:172] (0xc00080e2c0) Data frame received for 1\nI0701 09:02:48.355106 2052 log.go:172] (0xc000605400) (1) Data frame handling\nI0701 09:02:48.355124 2052 log.go:172] (0xc000605400) (1) Data frame sent\nI0701 09:02:48.355141 2052 log.go:172] (0xc00080e2c0) (0xc000605400) Stream removed, broadcasting: 1\nI0701 09:02:48.355225 2052 log.go:172] (0xc00080e2c0) Go away received\nI0701 09:02:48.355412 2052 log.go:172] (0xc00080e2c0) (0xc000605400) Stream removed, broadcasting: 1\nI0701 09:02:48.355434 2052 log.go:172] (0xc00080e2c0) (0xc0006054a0) Stream removed, broadcasting: 3\nI0701 09:02:48.355452 2052 log.go:172] (0xc00080e2c0) (0xc000605540) Stream removed, broadcasting: 5\n" Jul 1 09:02:48.361: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 09:02:48.361: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 09:02:48.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 09:02:48.578: INFO: stderr: "I0701 09:02:48.479323 2075 log.go:172] (0xc0008322c0) (0xc0005ab360) Create stream\nI0701 09:02:48.479397 2075 log.go:172] (0xc0008322c0) (0xc0005ab360) Stream added, broadcasting: 1\nI0701 09:02:48.481608 2075 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0701 09:02:48.481646 2075 log.go:172] (0xc0008322c0) (0xc000362000) Create stream\nI0701 09:02:48.481658 2075 log.go:172] (0xc0008322c0) (0xc000362000) Stream added, broadcasting: 3\nI0701 09:02:48.482410 2075 log.go:172] 
(0xc0008322c0) Reply frame received for 3\nI0701 09:02:48.482448 2075 log.go:172] (0xc0008322c0) (0xc0005ab400) Create stream\nI0701 09:02:48.482459 2075 log.go:172] (0xc0008322c0) (0xc0005ab400) Stream added, broadcasting: 5\nI0701 09:02:48.483188 2075 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0701 09:02:48.565759 2075 log.go:172] (0xc0008322c0) Data frame received for 3\nI0701 09:02:48.565790 2075 log.go:172] (0xc000362000) (3) Data frame handling\nI0701 09:02:48.565813 2075 log.go:172] (0xc000362000) (3) Data frame sent\nI0701 09:02:48.565825 2075 log.go:172] (0xc0008322c0) Data frame received for 3\nI0701 09:02:48.565833 2075 log.go:172] (0xc000362000) (3) Data frame handling\nI0701 09:02:48.565908 2075 log.go:172] (0xc0008322c0) Data frame received for 5\nI0701 09:02:48.565977 2075 log.go:172] (0xc0005ab400) (5) Data frame handling\nI0701 09:02:48.567629 2075 log.go:172] (0xc0008322c0) Data frame received for 1\nI0701 09:02:48.567645 2075 log.go:172] (0xc0005ab360) (1) Data frame handling\nI0701 09:02:48.567653 2075 log.go:172] (0xc0005ab360) (1) Data frame sent\nI0701 09:02:48.567663 2075 log.go:172] (0xc0008322c0) (0xc0005ab360) Stream removed, broadcasting: 1\nI0701 09:02:48.567722 2075 log.go:172] (0xc0008322c0) Go away received\nI0701 09:02:48.567809 2075 log.go:172] (0xc0008322c0) (0xc0005ab360) Stream removed, broadcasting: 1\nI0701 09:02:48.567824 2075 log.go:172] (0xc0008322c0) (0xc000362000) Stream removed, broadcasting: 3\nI0701 09:02:48.567834 2075 log.go:172] (0xc0008322c0) (0xc0005ab400) Stream removed, broadcasting: 5\n" Jul 1 09:02:48.578: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 09:02:48.578: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 09:02:48.578: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 09:02:48.581: INFO: Waiting for stateful set status.readyReplicas to become 0, 
currently 2 Jul 1 09:02:58.590: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 09:02:58.590: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 1 09:02:58.590: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 1 09:02:58.645: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:02:58.645: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:15 +0000 UTC }] Jul 1 09:02:58.645: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:02:58.645: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:02:58.645: INFO: Jul 1 09:02:58.645: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 09:02:59.651: 
INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:02:59.651: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:15 +0000 UTC }] Jul 1 09:02:59.651: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:02:59.651: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:02:59.651: INFO: Jul 1 09:02:59.651: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 09:03:00.663: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:03:00.663: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-07-01 09:02:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:15 +0000 UTC }] Jul 1 09:03:00.663: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:00.663: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:00.663: INFO: Jul 1 09:03:00.663: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 09:03:01.668: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:03:01.668: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:15 +0000 UTC }] Jul 1 09:03:01.668: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:01.668: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:01.668: INFO: Jul 1 09:03:01.668: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 09:03:02.672: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:03:02.672: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:02.672: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:02.672: INFO: Jul 1 
09:03:02.672: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 09:03:03.712: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:03:03.712: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:03.712: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:03.712: INFO: Jul 1 09:03:03.712: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 09:03:04.719: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:03:04.719: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:04.719: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:04.719: INFO: Jul 1 09:03:04.719: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 09:03:05.728: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:03:05.728: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:05.728: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:05.728: INFO: Jul 1 09:03:05.728: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 09:03:06.733: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:03:06.733: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:06.733: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:06.733: INFO: Jul 1 09:03:06.733: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 09:03:07.788: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 09:03:07.788: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:07.788: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:02:35 +0000 UTC }] Jul 1 09:03:07.788: INFO: Jul 1 09:03:07.788: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-4xnsk Jul 1 09:03:08.794: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 09:03:08.926: INFO: rc: 1 Jul 1 09:03:08.926: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0026245a0 exit status 1 true [0xc00115c0f0 0xc00115c108 0xc00115c120] [0xc00115c0f0 0xc00115c108 0xc00115c120] [0xc00115c100 0xc00115c118] [0x9355a0 0x9355a0] 0xc0027f2ae0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jul 1 09:03:18.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 09:03:19.016: INFO: rc: 1 Jul 1 09:03:19.016: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020ce6c0 exit status 1 true [0xc001dda1f8 0xc001dda230 0xc001dda290] [0xc001dda1f8 0xc001dda230 0xc001dda290] [0xc001dda220 0xc001dda280] [0x9355a0 0x9355a0] 0xc00226c240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 09:03:29.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 09:03:29.105: INFO: rc: 1 Jul 1 09:03:29.105: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0026246f0 exit status 1 true [0xc00115c128 0xc00115c140 0xc00115c158] [0xc00115c128 0xc00115c140 0xc00115c158] [0xc00115c138 0xc00115c150] [0x9355a0 0x9355a0] 0xc0027f3bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
Jul 1 09:03:39.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 1 09:03:39.199: INFO: rc: 1
Jul 1 09:03:39.199: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000fb1140 exit status 1 true [0xc0002251e8 0xc000225268 0xc0002252c0] [0xc0002251e8 0xc000225268 0xc0002252c0] [0xc000225230 0xc0002252b8] [0x9355a0 0x9355a0] 0xc001eb48a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
[identical "Running" / "rc: 1" / "Waiting 10s to retry failed RunHostCmd" records, differing only in timestamps and pointer values, repeated roughly every 10s from 09:03:49.200 through 09:08:03.322, each failing with Error from server (NotFound): pods "ss-1" not found; repeated records elided]
Jul 1 09:08:13.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4xnsk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 1 09:08:13.421: INFO: rc: 1
Jul 1 09:08:13.421: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1:
Jul 1 09:08:13.421: INFO: Scaling statefulset ss to 0
Jul 1 09:08:13.429: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 1 09:08:13.431: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4xnsk
Jul 1 09:08:13.434: INFO: Scaling statefulset ss to 0
Jul 1 09:08:13.442:
INFO: Waiting for statefulset status.replicas updated to 0
Jul 1 09:08:13.444: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 09:08:13.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4xnsk" for this suite.
Jul 1 09:08:19.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 09:08:19.564: INFO: namespace: e2e-tests-statefulset-4xnsk, resource: bindings, ignored listing per whitelist
Jul 1 09:08:19.612: INFO: namespace e2e-tests-statefulset-4xnsk deletion completed in 6.140629019s
• [SLOW TEST:364.596 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 09:08:19.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 09:08:19.842: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 1 09:08:24.847: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 1 09:08:24.847: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 1 09:08:26.851: INFO: Creating deployment "test-rollover-deployment" Jul 1 09:08:26.867: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 1 09:08:28.874: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 1 09:08:28.880: INFO: Ensure that both replica sets have 1 created replica Jul 1 09:08:28.885: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 1 09:08:28.890: INFO: Updating deployment test-rollover-deployment Jul 1 09:08:28.890: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 1 09:08:31.069: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 1 09:08:31.075: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 1 09:08:31.081: INFO: all replica sets need to contain the pod-template-hash label Jul 1 09:08:31.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729191306, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729191306, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729191309, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729191306, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 1 09:08:33.090: INFO: all replica sets need to contain the pod-template-hash label
Jul 1 09:08:33.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729191306, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729191306, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729191311, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729191306, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
[the same "all replica sets need to contain the pod-template-hash label" message and an unchanged deployment status were logged again at 09:08:35.088, 09:08:37.089, 09:08:39.090, and 09:08:41.089; repeated records elided]
Jul 1 09:08:43.087: INFO:
Jul 1 09:08:43.087: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 1 09:08:43.094: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-rtdjf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rtdjf/deployments/test-rollover-deployment,UID:6c796fdd-bb7a-11ea-99e8-0242ac110002,ResourceVersion:18837542,Generation:2,CreationTimestamp:2020-07-01 09:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-01 09:08:26 +0000 UTC 2020-07-01 09:08:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-01 09:08:42 +0000 UTC 2020-07-01 09:08:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 1 09:08:43.097: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-rtdjf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rtdjf/replicasets/test-rollover-deployment-5b8479fdb6,UID:6db0c510-bb7a-11ea-99e8-0242ac110002,ResourceVersion:18837533,Generation:2,CreationTimestamp:2020-07-01 09:08:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6c796fdd-bb7a-11ea-99e8-0242ac110002 0xc001b3fef7 0xc001b3fef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 1 09:08:43.097: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 1 09:08:43.097: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-rtdjf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rtdjf/replicasets/test-rollover-controller,UID:6846da42-bb7a-11ea-99e8-0242ac110002,ResourceVersion:18837541,Generation:2,CreationTimestamp:2020-07-01 09:08:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6c796fdd-bb7a-11ea-99e8-0242ac110002 0xc001b3fd57 0xc001b3fd58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 1 09:08:43.098: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-rtdjf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rtdjf/replicasets/test-rollover-deployment-58494b7559,UID:6c7d4c42-bb7a-11ea-99e8-0242ac110002,ResourceVersion:18837500,Generation:2,CreationTimestamp:2020-07-01 09:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6c796fdd-bb7a-11ea-99e8-0242ac110002 0xc001b3fe27 0xc001b3fe28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 1 09:08:43.100: INFO: Pod "test-rollover-deployment-5b8479fdb6-lwdm9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-lwdm9,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-rtdjf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rtdjf/pods/test-rollover-deployment-5b8479fdb6-lwdm9,UID:6dc7065c-bb7a-11ea-99e8-0242ac110002,ResourceVersion:18837511,Generation:0,CreationTimestamp:2020-07-01 09:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 6db0c510-bb7a-11ea-99e8-0242ac110002 0xc001f8e647 0xc001f8e648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hfsgn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hfsgn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hfsgn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f8e790} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f8e7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:08:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:08:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:08:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:08:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.50,StartTime:2020-07-01 09:08:29 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-01 09:08:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://f5622c05aba946959a3cb13287f493cb32b8e0553a09882e86ed75bc47b730a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:08:43.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-rtdjf" for this suite. Jul 1 09:08:51.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:08:51.188: INFO: namespace: e2e-tests-deployment-rtdjf, resource: bindings, ignored listing per whitelist Jul 1 09:08:51.231: INFO: namespace e2e-tests-deployment-rtdjf deletion completed in 8.127428786s • [SLOW TEST:31.618 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:08:51.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name 
projected-configmap-test-volume-7b1161eb-bb7a-11ea-a133-0242ac110018 STEP: Creating a pod to test consume configMaps Jul 1 09:08:51.383: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b140270-bb7a-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-bzbc2" to be "success or failure" Jul 1 09:08:51.401: INFO: Pod "pod-projected-configmaps-7b140270-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.511989ms Jul 1 09:08:53.405: INFO: Pod "pod-projected-configmaps-7b140270-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022279598s Jul 1 09:08:55.629: INFO: Pod "pod-projected-configmaps-7b140270-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245964094s Jul 1 09:08:57.634: INFO: Pod "pod-projected-configmaps-7b140270-bb7a-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.251184554s STEP: Saw pod success Jul 1 09:08:57.634: INFO: Pod "pod-projected-configmaps-7b140270-bb7a-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:08:57.637: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7b140270-bb7a-11ea-a133-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jul 1 09:08:57.653: INFO: Waiting for pod pod-projected-configmaps-7b140270-bb7a-11ea-a133-0242ac110018 to disappear Jul 1 09:08:57.676: INFO: Pod pod-projected-configmaps-7b140270-bb7a-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:08:57.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bzbc2" for this suite. 
Jul 1 09:09:03.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:09:03.781: INFO: namespace: e2e-tests-projected-bzbc2, resource: bindings, ignored listing per whitelist Jul 1 09:09:03.819: INFO: namespace e2e-tests-projected-bzbc2 deletion completed in 6.140356685s • [SLOW TEST:12.588 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:09:03.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-82b26be3-bb7a-11ea-a133-0242ac110018 STEP: Creating a pod to test consume secrets Jul 1 09:09:04.151: INFO: Waiting up to 5m0s for pod "pod-secrets-82b3a01d-bb7a-11ea-a133-0242ac110018" in namespace "e2e-tests-secrets-9htwp" to be "success or failure" Jul 1 09:09:04.309: INFO: Pod "pod-secrets-82b3a01d-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 157.884972ms Jul 1 09:09:06.313: INFO: Pod "pod-secrets-82b3a01d-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161329323s Jul 1 09:09:08.316: INFO: Pod "pod-secrets-82b3a01d-bb7a-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.164617129s Jul 1 09:09:10.320: INFO: Pod "pod-secrets-82b3a01d-bb7a-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168870489s STEP: Saw pod success Jul 1 09:09:10.320: INFO: Pod "pod-secrets-82b3a01d-bb7a-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:09:10.323: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-82b3a01d-bb7a-11ea-a133-0242ac110018 container secret-volume-test: STEP: delete the pod Jul 1 09:09:10.430: INFO: Waiting for pod pod-secrets-82b3a01d-bb7a-11ea-a133-0242ac110018 to disappear Jul 1 09:09:10.485: INFO: Pod pod-secrets-82b3a01d-bb7a-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:09:10.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9htwp" for this suite. 
Jul 1 09:09:16.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:09:16.560: INFO: namespace: e2e-tests-secrets-9htwp, resource: bindings, ignored listing per whitelist Jul 1 09:09:16.594: INFO: namespace e2e-tests-secrets-9htwp deletion completed in 6.1049294s • [SLOW TEST:12.774 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:09:16.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 1 09:09:17.139: INFO: Waiting up to 5m0s for pod "pod-8a707078-bb7a-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-vs74d" to be "success or failure" Jul 1 09:09:17.143: INFO: Pod "pod-8a707078-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08653ms Jul 1 09:09:19.147: INFO: Pod "pod-8a707078-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008156379s Jul 1 09:09:21.151: INFO: Pod "pod-8a707078-bb7a-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.011198689s Jul 1 09:09:23.154: INFO: Pod "pod-8a707078-bb7a-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014910144s STEP: Saw pod success Jul 1 09:09:23.154: INFO: Pod "pod-8a707078-bb7a-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:09:23.156: INFO: Trying to get logs from node hunter-worker2 pod pod-8a707078-bb7a-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 09:09:23.196: INFO: Waiting for pod pod-8a707078-bb7a-11ea-a133-0242ac110018 to disappear Jul 1 09:09:23.303: INFO: Pod pod-8a707078-bb7a-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:09:23.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vs74d" for this suite. 
Jul 1 09:09:29.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:09:29.377: INFO: namespace: e2e-tests-emptydir-vs74d, resource: bindings, ignored listing per whitelist Jul 1 09:09:29.388: INFO: namespace e2e-tests-emptydir-vs74d deletion completed in 6.080904669s • [SLOW TEST:12.794 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:09:29.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 09:09:29.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91de19c0-bb7a-11ea-a133-0242ac110018" in namespace "e2e-tests-downward-api-27jdb" to be "success or failure" Jul 1 09:09:29.599: INFO: Pod 
"downwardapi-volume-91de19c0-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.384434ms Jul 1 09:09:31.604: INFO: Pod "downwardapi-volume-91de19c0-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010321452s Jul 1 09:09:33.608: INFO: Pod "downwardapi-volume-91de19c0-bb7a-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014807422s STEP: Saw pod success Jul 1 09:09:33.608: INFO: Pod "downwardapi-volume-91de19c0-bb7a-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:09:33.611: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-91de19c0-bb7a-11ea-a133-0242ac110018 container client-container: STEP: delete the pod Jul 1 09:09:34.001: INFO: Waiting for pod downwardapi-volume-91de19c0-bb7a-11ea-a133-0242ac110018 to disappear Jul 1 09:09:34.018: INFO: Pod downwardapi-volume-91de19c0-bb7a-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:09:34.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-27jdb" for this suite. 
Jul 1 09:09:42.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:09:42.359: INFO: namespace: e2e-tests-downward-api-27jdb, resource: bindings, ignored listing per whitelist Jul 1 09:09:42.364: INFO: namespace e2e-tests-downward-api-27jdb deletion completed in 8.341995551s • [SLOW TEST:12.976 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:09:42.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-5d9dq I0701 09:09:42.470855 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-5d9dq, replica count: 1 I0701 09:09:43.521322 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 09:09:44.521529 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 09:09:45.521715 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 09:09:46.004: INFO: Created: latency-svc-7wbv4 Jul 1 09:09:46.189: INFO: Got endpoints: latency-svc-7wbv4 [568.03961ms] Jul 1 09:09:46.234: INFO: Created: latency-svc-rbd8g Jul 1 09:09:46.241: INFO: Got endpoints: latency-svc-rbd8g [51.755942ms] Jul 1 09:09:46.270: INFO: Created: latency-svc-v4mdw Jul 1 09:09:46.330: INFO: Got endpoints: latency-svc-v4mdw [140.440878ms] Jul 1 09:09:46.379: INFO: Created: latency-svc-qfvs8 Jul 1 09:09:46.477: INFO: Got endpoints: latency-svc-qfvs8 [286.721403ms] Jul 1 09:09:46.479: INFO: Created: latency-svc-t5tjr Jul 1 09:09:46.494: INFO: Got endpoints: latency-svc-t5tjr [303.515581ms] Jul 1 09:09:46.565: INFO: Created: latency-svc-wdl6r Jul 1 09:09:46.621: INFO: Got endpoints: latency-svc-wdl6r [431.049734ms] Jul 1 09:09:46.624: INFO: Created: latency-svc-fd9fs Jul 1 09:09:46.638: INFO: Got endpoints: latency-svc-fd9fs [448.184659ms] Jul 1 09:09:46.666: INFO: Created: latency-svc-lgztx Jul 1 09:09:46.680: INFO: Got endpoints: latency-svc-lgztx [489.787033ms] Jul 1 09:09:46.702: INFO: Created: latency-svc-w6nlv Jul 1 09:09:46.716: INFO: Got endpoints: latency-svc-w6nlv [526.246644ms] Jul 1 09:09:46.779: INFO: Created: latency-svc-5ldjp Jul 1 09:09:46.806: INFO: Got endpoints: latency-svc-5ldjp [615.433077ms] Jul 1 09:09:46.846: INFO: Created: latency-svc-x89bw Jul 1 09:09:46.867: INFO: Got endpoints: latency-svc-x89bw [677.239375ms] Jul 1 09:09:46.944: INFO: Created: latency-svc-dfd9n Jul 1 09:09:46.958: INFO: Got endpoints: latency-svc-dfd9n [767.839075ms] Jul 1 09:09:47.002: INFO: Created: latency-svc-bc5h2 Jul 1 09:09:47.018: INFO: Got endpoints: latency-svc-bc5h2 [828.204288ms] Jul 1 09:09:47.038: INFO: Created: latency-svc-tjftj Jul 1 09:09:47.102: INFO: Got endpoints: 
latency-svc-tjftj [912.325169ms] Jul 1 09:09:47.140: INFO: Created: latency-svc-t7f6k Jul 1 09:09:47.157: INFO: Got endpoints: latency-svc-t7f6k [966.775504ms] Jul 1 09:09:47.268: INFO: Created: latency-svc-klv47 Jul 1 09:09:47.282: INFO: Got endpoints: latency-svc-klv47 [1.092328796s] Jul 1 09:09:47.308: INFO: Created: latency-svc-vjmvd Jul 1 09:09:47.337: INFO: Got endpoints: latency-svc-vjmvd [1.095599498s] Jul 1 09:09:47.356: INFO: Created: latency-svc-vdxkb Jul 1 09:09:47.405: INFO: Got endpoints: latency-svc-vdxkb [1.075030035s] Jul 1 09:09:47.416: INFO: Created: latency-svc-d6xwj Jul 1 09:09:47.434: INFO: Got endpoints: latency-svc-d6xwj [957.115256ms] Jul 1 09:09:47.464: INFO: Created: latency-svc-mvx64 Jul 1 09:09:47.494: INFO: Got endpoints: latency-svc-mvx64 [1.000451988s] Jul 1 09:09:47.556: INFO: Created: latency-svc-tld7k Jul 1 09:09:47.578: INFO: Got endpoints: latency-svc-tld7k [957.270887ms] Jul 1 09:09:47.620: INFO: Created: latency-svc-59stl Jul 1 09:09:47.710: INFO: Got endpoints: latency-svc-59stl [1.072328407s] Jul 1 09:09:47.722: INFO: Created: latency-svc-rj8nt Jul 1 09:09:47.741: INFO: Got endpoints: latency-svc-rj8nt [1.061416022s] Jul 1 09:09:47.770: INFO: Created: latency-svc-tmr4x Jul 1 09:09:47.789: INFO: Got endpoints: latency-svc-tmr4x [1.073093251s] Jul 1 09:09:47.878: INFO: Created: latency-svc-tcnb7 Jul 1 09:09:47.883: INFO: Got endpoints: latency-svc-tcnb7 [1.077052494s] Jul 1 09:09:47.910: INFO: Created: latency-svc-7hvx7 Jul 1 09:09:47.922: INFO: Got endpoints: latency-svc-7hvx7 [1.054808094s] Jul 1 09:09:47.956: INFO: Created: latency-svc-qbsxc Jul 1 09:09:47.970: INFO: Got endpoints: latency-svc-qbsxc [1.012264029s] Jul 1 09:09:48.034: INFO: Created: latency-svc-4rgl9 Jul 1 09:09:48.065: INFO: Got endpoints: latency-svc-4rgl9 [1.046680277s] Jul 1 09:09:48.094: INFO: Created: latency-svc-tj9mk Jul 1 09:09:48.190: INFO: Got endpoints: latency-svc-tj9mk [1.087236767s] Jul 1 09:09:48.207: INFO: Created: latency-svc-ct5d7 Jul 1 
09:09:48.236: INFO: Got endpoints: latency-svc-ct5d7 [1.07866438s] Jul 1 09:09:48.411: INFO: Created: latency-svc-ckxsz Jul 1 09:09:48.428: INFO: Got endpoints: latency-svc-ckxsz [1.145413296s] Jul 1 09:09:48.453: INFO: Created: latency-svc-h57d9 Jul 1 09:09:48.464: INFO: Got endpoints: latency-svc-h57d9 [1.126642564s] Jul 1 09:09:48.555: INFO: Created: latency-svc-8ck6k Jul 1 09:09:48.560: INFO: Got endpoints: latency-svc-8ck6k [1.154314889s] Jul 1 09:09:48.615: INFO: Created: latency-svc-kg9vz Jul 1 09:09:48.626: INFO: Got endpoints: latency-svc-kg9vz [1.192331329s] Jul 1 09:09:48.704: INFO: Created: latency-svc-fzp7j Jul 1 09:09:48.708: INFO: Got endpoints: latency-svc-fzp7j [1.213395335s] Jul 1 09:09:48.742: INFO: Created: latency-svc-xdwx5 Jul 1 09:09:48.759: INFO: Got endpoints: latency-svc-xdwx5 [1.181042688s] Jul 1 09:09:48.854: INFO: Created: latency-svc-b9nsb Jul 1 09:09:48.862: INFO: Got endpoints: latency-svc-b9nsb [1.151647584s] Jul 1 09:09:48.898: INFO: Created: latency-svc-rw7wb Jul 1 09:09:48.916: INFO: Got endpoints: latency-svc-rw7wb [1.174669373s] Jul 1 09:09:49.053: INFO: Created: latency-svc-lctc6 Jul 1 09:09:49.056: INFO: Got endpoints: latency-svc-lctc6 [1.266489372s] Jul 1 09:09:49.103: INFO: Created: latency-svc-t8cpn Jul 1 09:09:49.138: INFO: Got endpoints: latency-svc-t8cpn [1.255748518s] Jul 1 09:09:49.225: INFO: Created: latency-svc-4ccj7 Jul 1 09:09:49.275: INFO: Got endpoints: latency-svc-4ccj7 [1.352855351s] Jul 1 09:09:49.276: INFO: Created: latency-svc-xp7pg Jul 1 09:09:49.312: INFO: Got endpoints: latency-svc-xp7pg [1.341631162s] Jul 1 09:09:49.394: INFO: Created: latency-svc-b4pwx Jul 1 09:09:49.410: INFO: Got endpoints: latency-svc-b4pwx [1.345626278s] Jul 1 09:09:49.474: INFO: Created: latency-svc-mlb92 Jul 1 09:09:49.483: INFO: Got endpoints: latency-svc-mlb92 [1.293442132s] Jul 1 09:09:49.595: INFO: Created: latency-svc-lv4pw Jul 1 09:09:49.654: INFO: Got endpoints: latency-svc-lv4pw [1.418346996s] Jul 1 09:09:49.758: INFO: 
Created: latency-svc-k424n Jul 1 09:09:49.764: INFO: Got endpoints: latency-svc-k424n [1.336505016s] Jul 1 09:09:49.816: INFO: Created: latency-svc-cxfrh Jul 1 09:09:49.903: INFO: Got endpoints: latency-svc-cxfrh [1.438677264s] Jul 1 09:09:49.905: INFO: Created: latency-svc-j4vhm Jul 1 09:09:49.916: INFO: Got endpoints: latency-svc-j4vhm [1.355847069s] Jul 1 09:09:49.949: INFO: Created: latency-svc-gbcwv Jul 1 09:09:49.957: INFO: Got endpoints: latency-svc-gbcwv [1.330976158s] Jul 1 09:09:50.088: INFO: Created: latency-svc-pfcws Jul 1 09:09:50.128: INFO: Got endpoints: latency-svc-pfcws [1.420782047s] Jul 1 09:09:50.129: INFO: Created: latency-svc-9qtnt Jul 1 09:09:50.139: INFO: Got endpoints: latency-svc-9qtnt [1.379110981s] Jul 1 09:09:50.250: INFO: Created: latency-svc-w7p5z Jul 1 09:09:50.447: INFO: Got endpoints: latency-svc-w7p5z [1.584847267s] Jul 1 09:09:50.450: INFO: Created: latency-svc-jhghb Jul 1 09:09:50.464: INFO: Got endpoints: latency-svc-jhghb [1.547793803s] Jul 1 09:09:50.489: INFO: Created: latency-svc-zg9hf Jul 1 09:09:50.530: INFO: Got endpoints: latency-svc-zg9hf [1.474171116s] Jul 1 09:09:50.635: INFO: Created: latency-svc-6bdnt Jul 1 09:09:50.666: INFO: Got endpoints: latency-svc-6bdnt [1.527925164s] Jul 1 09:09:50.692: INFO: Created: latency-svc-k9sxr Jul 1 09:09:50.710: INFO: Got endpoints: latency-svc-k9sxr [1.434606783s] Jul 1 09:09:50.806: INFO: Created: latency-svc-j9lsh Jul 1 09:09:50.825: INFO: Got endpoints: latency-svc-j9lsh [1.513084098s] Jul 1 09:09:50.880: INFO: Created: latency-svc-rv7kw Jul 1 09:09:50.902: INFO: Got endpoints: latency-svc-rv7kw [1.491836869s] Jul 1 09:09:50.970: INFO: Created: latency-svc-zrqzv Jul 1 09:09:50.972: INFO: Got endpoints: latency-svc-zrqzv [1.488380243s] Jul 1 09:09:51.023: INFO: Created: latency-svc-pwtjp Jul 1 09:09:51.047: INFO: Got endpoints: latency-svc-pwtjp [1.392941357s] Jul 1 09:09:51.148: INFO: Created: latency-svc-vqvnt Jul 1 09:09:51.181: INFO: Got endpoints: latency-svc-vqvnt 
[1.416599911s] Jul 1 09:09:51.227: INFO: Created: latency-svc-lblll Jul 1 09:09:51.240: INFO: Got endpoints: latency-svc-lblll [1.336965691s] Jul 1 09:09:51.306: INFO: Created: latency-svc-c5bxx Jul 1 09:09:51.312: INFO: Got endpoints: latency-svc-c5bxx [1.396943582s] Jul 1 09:09:51.383: INFO: Created: latency-svc-tdmwd Jul 1 09:09:51.402: INFO: Got endpoints: latency-svc-tdmwd [1.444723438s] Jul 1 09:09:51.465: INFO: Created: latency-svc-7sdpv Jul 1 09:09:51.472: INFO: Got endpoints: latency-svc-7sdpv [1.343182352s] Jul 1 09:09:51.516: INFO: Created: latency-svc-qn4h7 Jul 1 09:09:51.535: INFO: Got endpoints: latency-svc-qn4h7 [1.396883704s] Jul 1 09:09:51.603: INFO: Created: latency-svc-n6k6s Jul 1 09:09:51.605: INFO: Got endpoints: latency-svc-n6k6s [1.158351662s] Jul 1 09:09:51.690: INFO: Created: latency-svc-4nw7r Jul 1 09:09:51.698: INFO: Got endpoints: latency-svc-4nw7r [1.234236831s] Jul 1 09:09:51.794: INFO: Created: latency-svc-2fch6 Jul 1 09:09:51.797: INFO: Got endpoints: latency-svc-2fch6 [1.266869378s] Jul 1 09:09:51.829: INFO: Created: latency-svc-skzvb Jul 1 09:09:51.843: INFO: Got endpoints: latency-svc-skzvb [1.176994043s] Jul 1 09:09:51.863: INFO: Created: latency-svc-6tjsj Jul 1 09:09:52.016: INFO: Got endpoints: latency-svc-6tjsj [1.305919935s] Jul 1 09:09:52.018: INFO: Created: latency-svc-zfbgd Jul 1 09:09:52.029: INFO: Got endpoints: latency-svc-zfbgd [1.204220681s] Jul 1 09:09:52.097: INFO: Created: latency-svc-6xn6b Jul 1 09:09:52.184: INFO: Got endpoints: latency-svc-6xn6b [1.281796373s] Jul 1 09:09:52.187: INFO: Created: latency-svc-qbqph Jul 1 09:09:52.222: INFO: Got endpoints: latency-svc-qbqph [1.250643432s] Jul 1 09:09:52.323: INFO: Created: latency-svc-hx4bz Jul 1 09:09:52.327: INFO: Got endpoints: latency-svc-hx4bz [1.279654883s] Jul 1 09:09:52.409: INFO: Created: latency-svc-f6svr Jul 1 09:09:52.486: INFO: Got endpoints: latency-svc-f6svr [1.304352305s] Jul 1 09:09:52.498: INFO: Created: latency-svc-mdzqj Jul 1 09:09:52.517: INFO: 
Got endpoints: latency-svc-mdzqj [1.277480838s] Jul 1 09:09:52.663: INFO: Created: latency-svc-nvmpk Jul 1 09:09:52.667: INFO: Got endpoints: latency-svc-nvmpk [1.354185128s] Jul 1 09:09:52.860: INFO: Created: latency-svc-lhpv2 Jul 1 09:09:52.863: INFO: Got endpoints: latency-svc-lhpv2 [1.460517456s] Jul 1 09:09:52.907: INFO: Created: latency-svc-r9wrs Jul 1 09:09:52.926: INFO: Got endpoints: latency-svc-r9wrs [1.454181664s] Jul 1 09:09:52.954: INFO: Created: latency-svc-szllz Jul 1 09:09:53.064: INFO: Got endpoints: latency-svc-szllz [1.528108195s] Jul 1 09:09:53.068: INFO: Created: latency-svc-b42lb Jul 1 09:09:53.104: INFO: Got endpoints: latency-svc-b42lb [1.498438946s] Jul 1 09:09:53.147: INFO: Created: latency-svc-tt77k Jul 1 09:09:53.279: INFO: Got endpoints: latency-svc-tt77k [1.581206278s] Jul 1 09:09:53.310: INFO: Created: latency-svc-bpvgv Jul 1 09:09:53.328: INFO: Got endpoints: latency-svc-bpvgv [1.530418689s] Jul 1 09:09:53.447: INFO: Created: latency-svc-2mqdj Jul 1 09:09:53.460: INFO: Got endpoints: latency-svc-2mqdj [1.616718374s] Jul 1 09:09:53.609: INFO: Created: latency-svc-fh5cd Jul 1 09:09:53.623: INFO: Got endpoints: latency-svc-fh5cd [1.607014078s] Jul 1 09:09:53.694: INFO: Created: latency-svc-4lrlx Jul 1 09:09:53.770: INFO: Got endpoints: latency-svc-4lrlx [1.740574967s] Jul 1 09:09:53.824: INFO: Created: latency-svc-bhbhk Jul 1 09:09:53.845: INFO: Got endpoints: latency-svc-bhbhk [1.660301237s] Jul 1 09:09:53.866: INFO: Created: latency-svc-srj72 Jul 1 09:09:53.944: INFO: Got endpoints: latency-svc-srj72 [1.721698585s] Jul 1 09:09:53.948: INFO: Created: latency-svc-l9kjh Jul 1 09:09:53.965: INFO: Got endpoints: latency-svc-l9kjh [1.638379972s] Jul 1 09:09:53.998: INFO: Created: latency-svc-kzdjh Jul 1 09:09:54.013: INFO: Got endpoints: latency-svc-kzdjh [1.527849657s] Jul 1 09:09:54.076: INFO: Created: latency-svc-r7zzn Jul 1 09:09:54.100: INFO: Got endpoints: latency-svc-r7zzn [1.582697802s] Jul 1 09:09:54.148: INFO: Created: 
latency-svc-7gv7k Jul 1 09:09:54.282: INFO: Got endpoints: latency-svc-7gv7k [1.614739682s] Jul 1 09:09:54.292: INFO: Created: latency-svc-rb7gn Jul 1 09:09:54.316: INFO: Got endpoints: latency-svc-rb7gn [1.452945922s] Jul 1 09:09:54.384: INFO: Created: latency-svc-zhn7x Jul 1 09:09:54.429: INFO: Got endpoints: latency-svc-zhn7x [1.503066649s] Jul 1 09:09:54.472: INFO: Created: latency-svc-ktnz9 Jul 1 09:09:54.489: INFO: Got endpoints: latency-svc-ktnz9 [1.425406188s] Jul 1 09:09:54.522: INFO: Created: latency-svc-7gh6m Jul 1 09:09:54.573: INFO: Got endpoints: latency-svc-7gh6m [1.469262218s] Jul 1 09:09:54.623: INFO: Created: latency-svc-g6sqn Jul 1 09:09:54.640: INFO: Got endpoints: latency-svc-g6sqn [1.360411342s] Jul 1 09:09:54.753: INFO: Created: latency-svc-cmnls Jul 1 09:09:54.757: INFO: Got endpoints: latency-svc-cmnls [1.429391782s] Jul 1 09:09:54.820: INFO: Created: latency-svc-s9xkd Jul 1 09:09:54.838: INFO: Got endpoints: latency-svc-s9xkd [1.378001083s] Jul 1 09:09:54.922: INFO: Created: latency-svc-r98ff Jul 1 09:09:54.943: INFO: Got endpoints: latency-svc-r98ff [1.32060761s] Jul 1 09:09:54.966: INFO: Created: latency-svc-fhxpw Jul 1 09:09:54.983: INFO: Got endpoints: latency-svc-fhxpw [1.21290233s] Jul 1 09:09:55.010: INFO: Created: latency-svc-f2rwk Jul 1 09:09:55.020: INFO: Got endpoints: latency-svc-f2rwk [1.17488601s] Jul 1 09:09:55.124: INFO: Created: latency-svc-kzckw Jul 1 09:09:55.158: INFO: Got endpoints: latency-svc-kzckw [1.213833347s] Jul 1 09:09:55.197: INFO: Created: latency-svc-kgwqc Jul 1 09:09:55.289: INFO: Got endpoints: latency-svc-kgwqc [1.323844086s] Jul 1 09:09:55.294: INFO: Created: latency-svc-dn8tn Jul 1 09:09:55.308: INFO: Got endpoints: latency-svc-dn8tn [1.294569272s] Jul 1 09:09:55.336: INFO: Created: latency-svc-n78nn Jul 1 09:09:55.357: INFO: Got endpoints: latency-svc-n78nn [1.256678417s] Jul 1 09:09:55.466: INFO: Created: latency-svc-tl2g4 Jul 1 09:09:55.477: INFO: Got endpoints: latency-svc-tl2g4 [1.195429263s] Jul 1 
09:09:55.559: INFO: Created: latency-svc-wrwhs Jul 1 09:09:55.638: INFO: Got endpoints: latency-svc-wrwhs [1.322627659s] Jul 1 09:09:55.640: INFO: Created: latency-svc-qwffr Jul 1 09:09:55.658: INFO: Got endpoints: latency-svc-qwffr [1.229132678s] Jul 1 09:09:55.691: INFO: Created: latency-svc-d49sr Jul 1 09:09:55.706: INFO: Got endpoints: latency-svc-d49sr [1.217142961s] Jul 1 09:09:55.732: INFO: Created: latency-svc-dd8gh Jul 1 09:09:55.812: INFO: Got endpoints: latency-svc-dd8gh [1.239229536s] Jul 1 09:09:55.852: INFO: Created: latency-svc-wl57h Jul 1 09:09:55.869: INFO: Got endpoints: latency-svc-wl57h [1.229256114s] Jul 1 09:09:55.895: INFO: Created: latency-svc-m5v7j Jul 1 09:09:55.911: INFO: Got endpoints: latency-svc-m5v7j [1.153918398s] Jul 1 09:09:55.980: INFO: Created: latency-svc-882tp Jul 1 09:09:56.013: INFO: Got endpoints: latency-svc-882tp [1.175161572s] Jul 1 09:09:56.069: INFO: Created: latency-svc-wq8cl Jul 1 09:09:56.168: INFO: Got endpoints: latency-svc-wq8cl [1.223977971s] Jul 1 09:09:56.177: INFO: Created: latency-svc-6rvzb Jul 1 09:09:56.194: INFO: Got endpoints: latency-svc-6rvzb [1.211120119s] Jul 1 09:09:56.248: INFO: Created: latency-svc-djbb6 Jul 1 09:09:56.323: INFO: Created: latency-svc-5hwtx Jul 1 09:09:56.333: INFO: Got endpoints: latency-svc-5hwtx [1.174997139s] Jul 1 09:09:56.333: INFO: Got endpoints: latency-svc-djbb6 [1.313232719s] Jul 1 09:09:56.373: INFO: Created: latency-svc-vhsxj Jul 1 09:09:56.393: INFO: Got endpoints: latency-svc-vhsxj [1.104064266s] Jul 1 09:09:56.459: INFO: Created: latency-svc-zsgd7 Jul 1 09:09:56.461: INFO: Got endpoints: latency-svc-zsgd7 [1.152967553s] Jul 1 09:09:56.554: INFO: Created: latency-svc-7zwqp Jul 1 09:09:56.644: INFO: Got endpoints: latency-svc-7zwqp [1.287919385s] Jul 1 09:09:56.647: INFO: Created: latency-svc-d4snp Jul 1 09:09:56.663: INFO: Got endpoints: latency-svc-d4snp [1.185477342s] Jul 1 09:09:56.722: INFO: Created: latency-svc-kp2z2 Jul 1 09:09:56.737: INFO: Got endpoints: 
latency-svc-kp2z2 [1.098524309s] Jul 1 09:09:56.818: INFO: Created: latency-svc-l2dbk Jul 1 09:09:56.866: INFO: Got endpoints: latency-svc-l2dbk [1.207290835s] Jul 1 09:09:56.908: INFO: Created: latency-svc-7qvf5 Jul 1 09:09:56.986: INFO: Got endpoints: latency-svc-7qvf5 [1.279564872s] Jul 1 09:09:57.000: INFO: Created: latency-svc-sgfmb Jul 1 09:09:57.013: INFO: Got endpoints: latency-svc-sgfmb [1.200858566s] Jul 1 09:09:57.046: INFO: Created: latency-svc-zl26x Jul 1 09:09:57.166: INFO: Got endpoints: latency-svc-zl26x [1.296407375s] Jul 1 09:09:57.167: INFO: Created: latency-svc-wf2jb Jul 1 09:09:57.179: INFO: Got endpoints: latency-svc-wf2jb [1.268386695s] Jul 1 09:09:57.323: INFO: Created: latency-svc-288s8 Jul 1 09:09:57.327: INFO: Got endpoints: latency-svc-288s8 [1.313789766s] Jul 1 09:09:57.382: INFO: Created: latency-svc-82s8m Jul 1 09:09:57.401: INFO: Got endpoints: latency-svc-82s8m [1.23340871s] Jul 1 09:09:57.508: INFO: Created: latency-svc-wblxq Jul 1 09:09:57.544: INFO: Got endpoints: latency-svc-wblxq [1.349655742s] Jul 1 09:09:57.591: INFO: Created: latency-svc-xqzth Jul 1 09:09:57.687: INFO: Got endpoints: latency-svc-xqzth [1.353578886s] Jul 1 09:09:57.702: INFO: Created: latency-svc-pl86g Jul 1 09:09:57.726: INFO: Got endpoints: latency-svc-pl86g [1.3933825s] Jul 1 09:09:57.778: INFO: Created: latency-svc-zqbh7 Jul 1 09:09:57.842: INFO: Got endpoints: latency-svc-zqbh7 [1.449183784s] Jul 1 09:09:57.879: INFO: Created: latency-svc-z796v Jul 1 09:09:57.895: INFO: Got endpoints: latency-svc-z796v [1.433366324s] Jul 1 09:09:57.922: INFO: Created: latency-svc-wh8l6 Jul 1 09:09:57.943: INFO: Got endpoints: latency-svc-wh8l6 [1.298346748s] Jul 1 09:09:58.040: INFO: Created: latency-svc-8n48b Jul 1 09:09:58.079: INFO: Created: latency-svc-qznqj Jul 1 09:09:58.079: INFO: Got endpoints: latency-svc-8n48b [1.415987146s] Jul 1 09:09:58.525: INFO: Got endpoints: latency-svc-qznqj [1.788356613s] Jul 1 09:09:58.969: INFO: Created: latency-svc-9kfrw Jul 1 
09:09:59.020: INFO: Got endpoints: latency-svc-9kfrw [2.154099993s] Jul 1 09:09:59.067: INFO: Created: latency-svc-lxpch Jul 1 09:09:59.165: INFO: Got endpoints: latency-svc-lxpch [2.179364529s] Jul 1 09:09:59.210: INFO: Created: latency-svc-56b64 Jul 1 09:09:59.240: INFO: Got endpoints: latency-svc-56b64 [2.226223549s] Jul 1 09:09:59.345: INFO: Created: latency-svc-fdlqh Jul 1 09:09:59.353: INFO: Got endpoints: latency-svc-fdlqh [2.187394404s] Jul 1 09:09:59.396: INFO: Created: latency-svc-p78mq Jul 1 09:09:59.426: INFO: Got endpoints: latency-svc-p78mq [2.246206104s] Jul 1 09:09:59.543: INFO: Created: latency-svc-278c7 Jul 1 09:09:59.546: INFO: Got endpoints: latency-svc-278c7 [2.218735443s] Jul 1 09:09:59.627: INFO: Created: latency-svc-9c2zd Jul 1 09:09:59.643: INFO: Got endpoints: latency-svc-9c2zd [2.241484326s] Jul 1 09:09:59.699: INFO: Created: latency-svc-k7mg7 Jul 1 09:09:59.702: INFO: Got endpoints: latency-svc-k7mg7 [2.158379842s] Jul 1 09:09:59.860: INFO: Created: latency-svc-prvc2 Jul 1 09:09:59.888: INFO: Got endpoints: latency-svc-prvc2 [2.201600903s] Jul 1 09:09:59.931: INFO: Created: latency-svc-vz2vs Jul 1 09:09:59.950: INFO: Got endpoints: latency-svc-vz2vs [2.223195592s] Jul 1 09:10:00.226: INFO: Created: latency-svc-drr4b Jul 1 09:10:00.234: INFO: Got endpoints: latency-svc-drr4b [2.391921844s] Jul 1 09:10:01.628: INFO: Created: latency-svc-c9hns Jul 1 09:10:01.660: INFO: Got endpoints: latency-svc-c9hns [3.765652586s] Jul 1 09:10:01.700: INFO: Created: latency-svc-lfjtx Jul 1 09:10:01.818: INFO: Got endpoints: latency-svc-lfjtx [3.875022912s] Jul 1 09:10:01.872: INFO: Created: latency-svc-4czfb Jul 1 09:10:01.891: INFO: Got endpoints: latency-svc-4czfb [3.8125903s] Jul 1 09:10:01.993: INFO: Created: latency-svc-xn6xm Jul 1 09:10:01.996: INFO: Got endpoints: latency-svc-xn6xm [3.470325732s] Jul 1 09:10:02.088: INFO: Created: latency-svc-28bql Jul 1 09:10:02.162: INFO: Got endpoints: latency-svc-28bql [3.142193876s] Jul 1 09:10:02.197: INFO: 
Created: latency-svc-qfs5f Jul 1 09:10:02.220: INFO: Got endpoints: latency-svc-qfs5f [3.054660497s] Jul 1 09:10:02.351: INFO: Created: latency-svc-74c4r Jul 1 09:10:02.354: INFO: Got endpoints: latency-svc-74c4r [3.114169748s] Jul 1 09:10:02.406: INFO: Created: latency-svc-xch5c Jul 1 09:10:02.448: INFO: Got endpoints: latency-svc-xch5c [3.095200795s] Jul 1 09:10:02.531: INFO: Created: latency-svc-hqfxp Jul 1 09:10:02.538: INFO: Got endpoints: latency-svc-hqfxp [3.112745546s] Jul 1 09:10:02.592: INFO: Created: latency-svc-5s67s Jul 1 09:10:02.599: INFO: Got endpoints: latency-svc-5s67s [3.052947932s] Jul 1 09:10:02.670: INFO: Created: latency-svc-z2c99 Jul 1 09:10:02.678: INFO: Got endpoints: latency-svc-z2c99 [3.035100287s] Jul 1 09:10:02.730: INFO: Created: latency-svc-x7cxg Jul 1 09:10:02.806: INFO: Got endpoints: latency-svc-x7cxg [3.103793382s] Jul 1 09:10:02.844: INFO: Created: latency-svc-z225c Jul 1 09:10:02.858: INFO: Got endpoints: latency-svc-z225c [2.970014763s] Jul 1 09:10:02.904: INFO: Created: latency-svc-5m48d Jul 1 09:10:02.968: INFO: Got endpoints: latency-svc-5m48d [3.018202276s] Jul 1 09:10:02.972: INFO: Created: latency-svc-pmbc5 Jul 1 09:10:02.985: INFO: Got endpoints: latency-svc-pmbc5 [2.750639373s] Jul 1 09:10:03.031: INFO: Created: latency-svc-22h2w Jul 1 09:10:03.039: INFO: Got endpoints: latency-svc-22h2w [1.378933074s] Jul 1 09:10:03.144: INFO: Created: latency-svc-k8n4d Jul 1 09:10:03.156: INFO: Got endpoints: latency-svc-k8n4d [1.337599372s] Jul 1 09:10:03.192: INFO: Created: latency-svc-ldtbc Jul 1 09:10:03.210: INFO: Got endpoints: latency-svc-ldtbc [1.318401728s] Jul 1 09:10:03.235: INFO: Created: latency-svc-kb5b7 Jul 1 09:10:03.328: INFO: Got endpoints: latency-svc-kb5b7 [1.331931679s] Jul 1 09:10:03.331: INFO: Created: latency-svc-5nhbb Jul 1 09:10:03.353: INFO: Got endpoints: latency-svc-5nhbb [1.190716557s] Jul 1 09:10:03.483: INFO: Created: latency-svc-dr4cv Jul 1 09:10:03.497: INFO: Got endpoints: latency-svc-dr4cv 
[1.277122243s] Jul 1 09:10:03.534: INFO: Created: latency-svc-ltbnp Jul 1 09:10:03.552: INFO: Got endpoints: latency-svc-ltbnp [1.197829677s] Jul 1 09:10:03.582: INFO: Created: latency-svc-p52t9 Jul 1 09:10:03.652: INFO: Got endpoints: latency-svc-p52t9 [1.203580882s] Jul 1 09:10:03.735: INFO: Created: latency-svc-b2q7d Jul 1 09:10:03.858: INFO: Got endpoints: latency-svc-b2q7d [1.319894628s] Jul 1 09:10:03.920: INFO: Created: latency-svc-vd8dh Jul 1 09:10:03.986: INFO: Got endpoints: latency-svc-vd8dh [1.387312935s] Jul 1 09:10:04.008: INFO: Created: latency-svc-d2c6c Jul 1 09:10:04.012: INFO: Got endpoints: latency-svc-d2c6c [1.334563622s] Jul 1 09:10:04.080: INFO: Created: latency-svc-nl9hs Jul 1 09:10:04.166: INFO: Got endpoints: latency-svc-nl9hs [1.359912206s] Jul 1 09:10:04.183: INFO: Created: latency-svc-rfv2n Jul 1 09:10:04.219: INFO: Got endpoints: latency-svc-rfv2n [1.361021884s] Jul 1 09:10:04.316: INFO: Created: latency-svc-749zn Jul 1 09:10:04.333: INFO: Got endpoints: latency-svc-749zn [1.365032723s] Jul 1 09:10:04.398: INFO: Created: latency-svc-vcg7t Jul 1 09:10:04.513: INFO: Got endpoints: latency-svc-vcg7t [1.528143341s] Jul 1 09:10:04.517: INFO: Created: latency-svc-4d2vj Jul 1 09:10:04.545: INFO: Got endpoints: latency-svc-4d2vj [1.505580157s] Jul 1 09:10:04.603: INFO: Created: latency-svc-s2f76 Jul 1 09:10:04.611: INFO: Got endpoints: latency-svc-s2f76 [1.455194657s] Jul 1 09:10:04.669: INFO: Created: latency-svc-bxhfs Jul 1 09:10:04.678: INFO: Got endpoints: latency-svc-bxhfs [1.46815089s] Jul 1 09:10:04.710: INFO: Created: latency-svc-mn77f Jul 1 09:10:04.732: INFO: Got endpoints: latency-svc-mn77f [1.404569784s] Jul 1 09:10:04.758: INFO: Created: latency-svc-84w8h Jul 1 09:10:04.819: INFO: Got endpoints: latency-svc-84w8h [1.465916941s] Jul 1 09:10:04.861: INFO: Created: latency-svc-qktc6 Jul 1 09:10:04.882: INFO: Got endpoints: latency-svc-qktc6 [1.384992901s] Jul 1 09:10:04.999: INFO: Created: latency-svc-m8c9b Jul 1 09:10:05.016: INFO: 
Got endpoints: latency-svc-m8c9b [1.464020437s] Jul 1 09:10:05.045: INFO: Created: latency-svc-tz9pl Jul 1 09:10:05.069: INFO: Got endpoints: latency-svc-tz9pl [1.417481608s] Jul 1 09:10:05.184: INFO: Created: latency-svc-ccvcj Jul 1 09:10:05.232: INFO: Got endpoints: latency-svc-ccvcj [1.373279472s] Jul 1 09:10:05.279: INFO: Created: latency-svc-mwkkt Jul 1 09:10:05.370: INFO: Got endpoints: latency-svc-mwkkt [1.383471463s] Jul 1 09:10:05.375: INFO: Created: latency-svc-f6b29 Jul 1 09:10:05.382: INFO: Got endpoints: latency-svc-f6b29 [1.369491151s] Jul 1 09:10:05.430: INFO: Created: latency-svc-k72nd Jul 1 09:10:05.462: INFO: Got endpoints: latency-svc-k72nd [1.295652662s] Jul 1 09:10:05.514: INFO: Created: latency-svc-bhbxq Jul 1 09:10:05.528: INFO: Got endpoints: latency-svc-bhbxq [1.308430447s] Jul 1 09:10:05.567: INFO: Created: latency-svc-5ljnz Jul 1 09:10:05.575: INFO: Got endpoints: latency-svc-5ljnz [1.242450894s] Jul 1 09:10:05.603: INFO: Created: latency-svc-5jg5s Jul 1 09:10:05.693: INFO: Got endpoints: latency-svc-5jg5s [1.179322606s] Jul 1 09:10:05.696: INFO: Created: latency-svc-gj4lk Jul 1 09:10:05.726: INFO: Got endpoints: latency-svc-gj4lk [1.181169862s] Jul 1 09:10:05.885: INFO: Created: latency-svc-rj9lk Jul 1 09:10:05.887: INFO: Got endpoints: latency-svc-rj9lk [1.27590032s] Jul 1 09:10:05.939: INFO: Created: latency-svc-42m9g Jul 1 09:10:05.949: INFO: Got endpoints: latency-svc-42m9g [1.270967813s] Jul 1 09:10:05.975: INFO: Created: latency-svc-k2v9d Jul 1 09:10:06.070: INFO: Got endpoints: latency-svc-k2v9d [1.337399446s] Jul 1 09:10:06.083: INFO: Created: latency-svc-gxwd5 Jul 1 09:10:06.113: INFO: Got endpoints: latency-svc-gxwd5 [1.293785374s] Jul 1 09:10:06.226: INFO: Created: latency-svc-4lkps Jul 1 09:10:06.229: INFO: Got endpoints: latency-svc-4lkps [1.347002797s] Jul 1 09:10:06.229: INFO: Latencies: [51.755942ms 140.440878ms 286.721403ms 303.515581ms 431.049734ms 448.184659ms 489.787033ms 526.246644ms 615.433077ms 677.239375ms 
767.839075ms 828.204288ms 912.325169ms 957.115256ms 957.270887ms 966.775504ms 1.000451988s 1.012264029s 1.046680277s 1.054808094s 1.061416022s 1.072328407s 1.073093251s 1.075030035s 1.077052494s 1.07866438s 1.087236767s 1.092328796s 1.095599498s 1.098524309s 1.104064266s 1.126642564s 1.145413296s 1.151647584s 1.152967553s 1.153918398s 1.154314889s 1.158351662s 1.174669373s 1.17488601s 1.174997139s 1.175161572s 1.176994043s 1.179322606s 1.181042688s 1.181169862s 1.185477342s 1.190716557s 1.192331329s 1.195429263s 1.197829677s 1.200858566s 1.203580882s 1.204220681s 1.207290835s 1.211120119s 1.21290233s 1.213395335s 1.213833347s 1.217142961s 1.223977971s 1.229132678s 1.229256114s 1.23340871s 1.234236831s 1.239229536s 1.242450894s 1.250643432s 1.255748518s 1.256678417s 1.266489372s 1.266869378s 1.268386695s 1.270967813s 1.27590032s 1.277122243s 1.277480838s 1.279564872s 1.279654883s 1.281796373s 1.287919385s 1.293442132s 1.293785374s 1.294569272s 1.295652662s 1.296407375s 1.298346748s 1.304352305s 1.305919935s 1.308430447s 1.313232719s 1.313789766s 1.318401728s 1.319894628s 1.32060761s 1.322627659s 1.323844086s 1.330976158s 1.331931679s 1.334563622s 1.336505016s 1.336965691s 1.337399446s 1.337599372s 1.341631162s 1.343182352s 1.345626278s 1.347002797s 1.349655742s 1.352855351s 1.353578886s 1.354185128s 1.355847069s 1.359912206s 1.360411342s 1.361021884s 1.365032723s 1.369491151s 1.373279472s 1.378001083s 1.378933074s 1.379110981s 1.383471463s 1.384992901s 1.387312935s 1.392941357s 1.3933825s 1.396883704s 1.396943582s 1.404569784s 1.415987146s 1.416599911s 1.417481608s 1.418346996s 1.420782047s 1.425406188s 1.429391782s 1.433366324s 1.434606783s 1.438677264s 1.444723438s 1.449183784s 1.452945922s 1.454181664s 1.455194657s 1.460517456s 1.464020437s 1.465916941s 1.46815089s 1.469262218s 1.474171116s 1.488380243s 1.491836869s 1.498438946s 1.503066649s 1.505580157s 1.513084098s 1.527849657s 1.527925164s 1.528108195s 1.528143341s 1.530418689s 1.547793803s 1.581206278s 
1.582697802s 1.584847267s 1.607014078s 1.614739682s 1.616718374s 1.638379972s 1.660301237s 1.721698585s 1.740574967s 1.788356613s 2.154099993s 2.158379842s 2.179364529s 2.187394404s 2.201600903s 2.218735443s 2.223195592s 2.226223549s 2.241484326s 2.246206104s 2.391921844s 2.750639373s 2.970014763s 3.018202276s 3.035100287s 3.052947932s 3.054660497s 3.095200795s 3.103793382s 3.112745546s 3.114169748s 3.142193876s 3.470325732s 3.765652586s 3.8125903s 3.875022912s] Jul 1 09:10:06.230: INFO: 50 %ile: 1.336505016s Jul 1 09:10:06.230: INFO: 90 %ile: 2.223195592s Jul 1 09:10:06.230: INFO: 99 %ile: 3.8125903s Jul 1 09:10:06.230: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:10:06.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-5d9dq" for this suite. Jul 1 09:11:06.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:11:06.325: INFO: namespace: e2e-tests-svc-latency-5d9dq, resource: bindings, ignored listing per whitelist Jul 1 09:11:06.697: INFO: namespace e2e-tests-svc-latency-5d9dq deletion completed in 1m0.44196785s • [SLOW TEST:84.332 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client Jul 1 09:11:06.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-cbd69573-bb7a-11ea-a133-0242ac110018 STEP: Creating a pod to test consume configMaps Jul 1 09:11:06.873: INFO: Waiting up to 5m0s for pod "pod-configmaps-cbd74447-bb7a-11ea-a133-0242ac110018" in namespace "e2e-tests-configmap-tqbns" to be "success or failure" Jul 1 09:11:06.887: INFO: Pod "pod-configmaps-cbd74447-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.851946ms Jul 1 09:11:08.891: INFO: Pod "pod-configmaps-cbd74447-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018063266s Jul 1 09:11:10.895: INFO: Pod "pod-configmaps-cbd74447-bb7a-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021901282s STEP: Saw pod success Jul 1 09:11:10.895: INFO: Pod "pod-configmaps-cbd74447-bb7a-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:11:10.898: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-cbd74447-bb7a-11ea-a133-0242ac110018 container configmap-volume-test: STEP: delete the pod Jul 1 09:11:11.103: INFO: Waiting for pod pod-configmaps-cbd74447-bb7a-11ea-a133-0242ac110018 to disappear Jul 1 09:11:11.121: INFO: Pod pod-configmaps-cbd74447-bb7a-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:11:11.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tqbns" for this suite. 
Jul 1 09:11:17.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:11:17.165: INFO: namespace: e2e-tests-configmap-tqbns, resource: bindings, ignored listing per whitelist Jul 1 09:11:17.221: INFO: namespace e2e-tests-configmap-tqbns deletion completed in 6.097500458s • [SLOW TEST:10.525 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:11:17.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 1 09:11:17.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-8t789' Jul 1 09:11:20.088: INFO: stderr: 
"kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 1 09:11:20.088: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jul 1 09:11:20.092: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jul 1 09:11:20.108: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jul 1 09:11:20.155: INFO: scanned /root for discovery docs: Jul 1 09:11:20.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-8t789' Jul 1 09:11:36.069: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jul 1 09:11:36.069: INFO: stdout: "Created e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05\nScaling up e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jul 1 09:11:36.069: INFO: stdout: "Created e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05\nScaling up e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jul 1 09:11:36.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8t789' Jul 1 09:11:36.231: INFO: stderr: "" Jul 1 09:11:36.231: INFO: stdout: "e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05-z6qtt " Jul 1 09:11:36.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05-z6qtt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8t789' Jul 1 09:11:36.355: INFO: stderr: "" Jul 1 09:11:36.355: INFO: stdout: "true" Jul 1 09:11:36.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05-z6qtt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8t789' Jul 1 09:11:36.480: INFO: stderr: "" Jul 1 09:11:36.480: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jul 1 09:11:36.480: INFO: e2e-test-nginx-rc-f87787059259d4353c589f4cd627bf05-z6qtt is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Jul 1 09:11:36.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8t789' Jul 1 09:11:36.644: INFO: stderr: "" Jul 1 09:11:36.644: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:11:36.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8t789" for this suite. 
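The go-template query above prints `true` only when a container status has the expected name and an existing `running` state. The same check, over a decoded pod object, can be sketched in Python; the field names follow the Kubernetes pod status schema, but the helper name and sample pod are made up for illustration:

```python
def container_running(pod: dict, name: str) -> bool:
    """True if the named container reports a 'running' state, mirroring
    the template check (and (eq .name ...) (exists . "state" "running"))."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

# Minimal illustrative pod object, shaped like `kubectl get pod -o json`.
pod = {"status": {"containerStatuses": [
    {"name": "e2e-test-nginx-rc",
     "state": {"running": {"startedAt": "2020-07-01T09:11:30Z"}}},
]}}
```

Checking for the key `"running"` inside `state` matches the template's `exists` semantics: a container that is `waiting` or `terminated` has a different key and fails the test.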
Jul 1 09:11:44.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:11:44.740: INFO: namespace: e2e-tests-kubectl-8t789, resource: bindings, ignored listing per whitelist Jul 1 09:11:44.751: INFO: namespace e2e-tests-kubectl-8t789 deletion completed in 8.095136179s • [SLOW TEST:27.530 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:11:44.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 09:11:44.886: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2825532-bb7a-11ea-a133-0242ac110018" in 
namespace "e2e-tests-projected-2xkm6" to be "success or failure" Jul 1 09:11:44.893: INFO: Pod "downwardapi-volume-e2825532-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354791ms Jul 1 09:11:46.965: INFO: Pod "downwardapi-volume-e2825532-bb7a-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078045792s Jul 1 09:11:48.968: INFO: Pod "downwardapi-volume-e2825532-bb7a-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.081487735s Jul 1 09:11:50.972: INFO: Pod "downwardapi-volume-e2825532-bb7a-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085705867s STEP: Saw pod success Jul 1 09:11:50.972: INFO: Pod "downwardapi-volume-e2825532-bb7a-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:11:50.975: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e2825532-bb7a-11ea-a133-0242ac110018 container client-container: STEP: delete the pod Jul 1 09:11:51.011: INFO: Waiting for pod downwardapi-volume-e2825532-bb7a-11ea-a133-0242ac110018 to disappear Jul 1 09:11:51.025: INFO: Pod downwardapi-volume-e2825532-bb7a-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:11:51.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2xkm6" for this suite. 
Jul 1 09:11:57.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:11:57.074: INFO: namespace: e2e-tests-projected-2xkm6, resource: bindings, ignored listing per whitelist Jul 1 09:11:57.141: INFO: namespace e2e-tests-projected-2xkm6 deletion completed in 6.112758598s • [SLOW TEST:12.389 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:11:57.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 09:11:57.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jul 1 09:11:57.375: INFO: stderr: "" Jul 1 09:11:57.375: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", 
GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-01T07:27:42Z\", GoVersion:\"go1.11.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jul 1 09:11:57.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvmfn' Jul 1 09:11:58.283: INFO: stderr: "" Jul 1 09:11:58.283: INFO: stdout: "replicationcontroller/redis-master created\n" Jul 1 09:11:58.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvmfn' Jul 1 09:11:59.710: INFO: stderr: "" Jul 1 09:11:59.711: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jul 1 09:12:00.715: INFO: Selector matched 1 pods for map[app:redis] Jul 1 09:12:00.715: INFO: Found 0 / 1 Jul 1 09:12:01.716: INFO: Selector matched 1 pods for map[app:redis] Jul 1 09:12:01.716: INFO: Found 0 / 1 Jul 1 09:12:02.715: INFO: Selector matched 1 pods for map[app:redis] Jul 1 09:12:02.715: INFO: Found 0 / 1 Jul 1 09:12:03.715: INFO: Selector matched 1 pods for map[app:redis] Jul 1 09:12:03.716: INFO: Found 1 / 1 Jul 1 09:12:03.716: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 1 09:12:03.719: INFO: Selector matched 1 pods for map[app:redis] Jul 1 09:12:03.719: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 1 09:12:03.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-ht82q --namespace=e2e-tests-kubectl-qvmfn' Jul 1 09:12:03.831: INFO: stderr: "" Jul 1 09:12:03.831: INFO: stdout: "Name: redis-master-ht82q\nNamespace: e2e-tests-kubectl-qvmfn\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Wed, 01 Jul 2020 09:11:59 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.151\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://07c3c780d35b5ce2b3dc92403c7a590ba408e7b17a7da029b48488f8571f1aac\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 01 Jul 2020 09:12:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-42v6w (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-42v6w:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-42v6w\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-qvmfn/redis-master-ht82q to hunter-worker2\n Normal Pulled 3s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" Jul 1 09:12:03.831: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-qvmfn' Jul 1 09:12:03.962: INFO: stderr: "" Jul 1 09:12:03.962: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-qvmfn\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-ht82q\n" Jul 1 09:12:03.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-qvmfn' Jul 1 09:12:04.072: INFO: stderr: "" Jul 1 09:12:04.072: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-qvmfn\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.228.12\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.151:6379\nSession Affinity: None\nEvents: \n" Jul 1 09:12:04.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Jul 1 09:12:04.225: INFO: stderr: "" Jul 1 09:12:04.225: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status 
LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 01 Jul 2020 09:12:03 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 01 Jul 2020 09:12:03 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 01 Jul 2020 09:12:03 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 01 Jul 2020 09:12:03 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 107d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 107d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 107d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 107d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 107d\n kube-system 
kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 107d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 107d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jul 1 09:12:04.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-qvmfn' Jul 1 09:12:04.351: INFO: stderr: "" Jul 1 09:12:04.351: INFO: stdout: "Name: e2e-tests-kubectl-qvmfn\nLabels: e2e-framework=kubectl\n e2e-run=e79ec815-bb6c-11ea-a133-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:12:04.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qvmfn" for this suite. 
Jul 1 09:12:26.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 09:12:26.439: INFO: namespace: e2e-tests-kubectl-qvmfn, resource: bindings, ignored listing per whitelist
Jul 1 09:12:26.460: INFO: namespace e2e-tests-kubectl-qvmfn deletion completed in 22.105519046s
• [SLOW TEST:29.319 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 09:12:26.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-fb5a1ff1-bb7a-11ea-a133-0242ac110018
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 09:12:32.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m6cxt" for this suite.
Jul 1 09:12:54.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 09:12:54.768: INFO: namespace: e2e-tests-configmap-m6cxt, resource: bindings, ignored listing per whitelist
Jul 1 09:12:54.777: INFO: namespace e2e-tests-configmap-m6cxt deletion completed in 22.12682003s
• [SLOW TEST:28.317 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 09:12:54.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0c3dc2dc-bb7b-11ea-a133-0242ac110018
STEP: Creating a pod to test consume secrets
Jul 1 09:12:54.939: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0c4244b7-bb7b-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-tn2m7" to be "success or failure"
Jul 1 09:12:54.983: INFO: Pod "pod-projected-secrets-0c4244b7-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 43.392574ms
Jul 1 09:12:56.987: INFO: Pod "pod-projected-secrets-0c4244b7-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047125529s
Jul 1 09:12:58.990: INFO: Pod "pod-projected-secrets-0c4244b7-bb7b-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.050235602s
Jul 1 09:13:00.994: INFO: Pod "pod-projected-secrets-0c4244b7-bb7b-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054213477s
STEP: Saw pod success
Jul 1 09:13:00.994: INFO: Pod "pod-projected-secrets-0c4244b7-bb7b-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 09:13:00.997: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-0c4244b7-bb7b-11ea-a133-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
Jul 1 09:13:01.040: INFO: Waiting for pod pod-projected-secrets-0c4244b7-bb7b-11ea-a133-0242ac110018 to disappear
Jul 1 09:13:01.046: INFO: Pod pod-projected-secrets-0c4244b7-bb7b-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 09:13:01.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tn2m7" for this suite.
Jul 1 09:13:07.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 09:13:07.133: INFO: namespace: e2e-tests-projected-tn2m7, resource: bindings, ignored listing per whitelist
Jul 1 09:13:07.166: INFO: namespace e2e-tests-projected-tn2m7 deletion completed in 6.116387106s
• [SLOW TEST:12.389 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 09:13:07.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-tjj74/configmap-test-13a4a3da-bb7b-11ea-a133-0242ac110018
STEP: Creating a pod to test consume configMaps
Jul 1 09:13:07.456: INFO: Waiting up to 5m0s for pod "pod-configmaps-13a96797-bb7b-11ea-a133-0242ac110018" in namespace "e2e-tests-configmap-tjj74" to be "success or failure"
Jul 1 09:13:07.466: INFO: Pod "pod-configmaps-13a96797-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.163333ms
Jul 1 09:13:09.470: INFO: Pod "pod-configmaps-13a96797-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01416218s
Jul 1 09:13:11.474: INFO: Pod "pod-configmaps-13a96797-bb7b-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01839554s
STEP: Saw pod success
Jul 1 09:13:11.475: INFO: Pod "pod-configmaps-13a96797-bb7b-11ea-a133-0242ac110018" satisfied condition "success or failure"
Jul 1 09:13:11.477: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-13a96797-bb7b-11ea-a133-0242ac110018 container env-test:
STEP: delete the pod
Jul 1 09:13:11.535: INFO: Waiting for pod pod-configmaps-13a96797-bb7b-11ea-a133-0242ac110018 to disappear
Jul 1 09:13:11.574: INFO: Pod pod-configmaps-13a96797-bb7b-11ea-a133-0242ac110018 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 09:13:11.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tjj74" for this suite.
Jul 1 09:13:17.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 09:13:17.651: INFO: namespace: e2e-tests-configmap-tjj74, resource: bindings, ignored listing per whitelist
Jul 1 09:13:17.668: INFO: namespace e2e-tests-configmap-tjj74 deletion completed in 6.090669964s
• [SLOW TEST:10.502 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 09:13:17.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r6jw6
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 1 09:13:17.796: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 1 09:13:43.959: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.56:8080/dial?request=hostName&protocol=http&host=10.244.2.153&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-r6jw6
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:13:43.959: INFO: >>> kubeConfig: /root/.kube/config I0701 09:13:43.995273 6 log.go:172] (0xc000a0a4d0) (0xc000370780) Create stream I0701 09:13:43.995307 6 log.go:172] (0xc000a0a4d0) (0xc000370780) Stream added, broadcasting: 1 I0701 09:13:43.997662 6 log.go:172] (0xc000a0a4d0) Reply frame received for 1 I0701 09:13:43.997710 6 log.go:172] (0xc000a0a4d0) (0xc001971b80) Create stream I0701 09:13:43.997744 6 log.go:172] (0xc000a0a4d0) (0xc001971b80) Stream added, broadcasting: 3 I0701 09:13:43.998625 6 log.go:172] (0xc000a0a4d0) Reply frame received for 3 I0701 09:13:43.998657 6 log.go:172] (0xc000a0a4d0) (0xc000a24c80) Create stream I0701 09:13:43.998669 6 log.go:172] (0xc000a0a4d0) (0xc000a24c80) Stream added, broadcasting: 5 I0701 09:13:43.999551 6 log.go:172] (0xc000a0a4d0) Reply frame received for 5 I0701 09:13:44.086263 6 log.go:172] (0xc000a0a4d0) Data frame received for 3 I0701 09:13:44.086292 6 log.go:172] (0xc001971b80) (3) Data frame handling I0701 09:13:44.086308 6 log.go:172] (0xc001971b80) (3) Data frame sent I0701 09:13:44.086872 6 log.go:172] (0xc000a0a4d0) Data frame received for 5 I0701 09:13:44.086906 6 log.go:172] (0xc000a24c80) (5) Data frame handling I0701 09:13:44.086944 6 log.go:172] (0xc000a0a4d0) Data frame received for 3 I0701 09:13:44.086987 6 log.go:172] (0xc001971b80) (3) Data frame handling I0701 09:13:44.088415 6 log.go:172] (0xc000a0a4d0) Data frame received for 1 I0701 09:13:44.088438 6 log.go:172] (0xc000370780) (1) Data frame handling I0701 09:13:44.088456 6 log.go:172] (0xc000370780) (1) Data frame sent I0701 09:13:44.088476 6 log.go:172] (0xc000a0a4d0) (0xc000370780) Stream removed, broadcasting: 1 I0701 09:13:44.088500 6 log.go:172] (0xc000a0a4d0) Go away received I0701 09:13:44.088632 6 log.go:172] (0xc000a0a4d0) (0xc000370780) Stream removed, broadcasting: 1 I0701 09:13:44.088650 6 log.go:172] 
(0xc000a0a4d0) (0xc001971b80) Stream removed, broadcasting: 3 I0701 09:13:44.088658 6 log.go:172] (0xc000a0a4d0) (0xc000a24c80) Stream removed, broadcasting: 5 Jul 1 09:13:44.088: INFO: Waiting for endpoints: map[] Jul 1 09:13:44.091: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.56:8080/dial?request=hostName&protocol=http&host=10.244.1.55&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-r6jw6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:13:44.091: INFO: >>> kubeConfig: /root/.kube/config I0701 09:13:44.123205 6 log.go:172] (0xc0000ebef0) (0xc001d89ae0) Create stream I0701 09:13:44.123235 6 log.go:172] (0xc0000ebef0) (0xc001d89ae0) Stream added, broadcasting: 1 I0701 09:13:44.125483 6 log.go:172] (0xc0000ebef0) Reply frame received for 1 I0701 09:13:44.125531 6 log.go:172] (0xc0000ebef0) (0xc002199360) Create stream I0701 09:13:44.125548 6 log.go:172] (0xc0000ebef0) (0xc002199360) Stream added, broadcasting: 3 I0701 09:13:44.126548 6 log.go:172] (0xc0000ebef0) Reply frame received for 3 I0701 09:13:44.126621 6 log.go:172] (0xc0000ebef0) (0xc002199400) Create stream I0701 09:13:44.126650 6 log.go:172] (0xc0000ebef0) (0xc002199400) Stream added, broadcasting: 5 I0701 09:13:44.127493 6 log.go:172] (0xc0000ebef0) Reply frame received for 5 I0701 09:13:44.198471 6 log.go:172] (0xc0000ebef0) Data frame received for 3 I0701 09:13:44.198508 6 log.go:172] (0xc002199360) (3) Data frame handling I0701 09:13:44.198529 6 log.go:172] (0xc002199360) (3) Data frame sent I0701 09:13:44.199061 6 log.go:172] (0xc0000ebef0) Data frame received for 3 I0701 09:13:44.199088 6 log.go:172] (0xc002199360) (3) Data frame handling I0701 09:13:44.199124 6 log.go:172] (0xc0000ebef0) Data frame received for 5 I0701 09:13:44.199173 6 log.go:172] (0xc002199400) (5) Data frame handling I0701 09:13:44.200415 6 log.go:172] (0xc0000ebef0) Data frame received for 1 I0701 
09:13:44.200440 6 log.go:172] (0xc001d89ae0) (1) Data frame handling I0701 09:13:44.200458 6 log.go:172] (0xc001d89ae0) (1) Data frame sent I0701 09:13:44.200479 6 log.go:172] (0xc0000ebef0) (0xc001d89ae0) Stream removed, broadcasting: 1 I0701 09:13:44.200494 6 log.go:172] (0xc0000ebef0) Go away received I0701 09:13:44.200631 6 log.go:172] (0xc0000ebef0) (0xc001d89ae0) Stream removed, broadcasting: 1 I0701 09:13:44.200654 6 log.go:172] (0xc0000ebef0) (0xc002199360) Stream removed, broadcasting: 3 I0701 09:13:44.200676 6 log.go:172] (0xc0000ebef0) (0xc002199400) Stream removed, broadcasting: 5 Jul 1 09:13:44.200: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:13:44.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-r6jw6" for this suite. Jul 1 09:14:08.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:14:08.296: INFO: namespace: e2e-tests-pod-network-test-r6jw6, resource: bindings, ignored listing per whitelist Jul 1 09:14:08.298: INFO: namespace e2e-tests-pod-network-test-r6jw6 deletion completed in 24.093342512s • [SLOW TEST:50.630 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] 
Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 09:14:08.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-38108434-bb7b-11ea-a133-0242ac110018
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-38108434-bb7b-11ea-a133-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 09:14:14.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cswfz" for this suite.
Jul 1 09:14:36.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 09:14:36.512: INFO: namespace: e2e-tests-projected-cswfz, resource: bindings, ignored listing per whitelist
Jul 1 09:14:36.568: INFO: namespace e2e-tests-projected-cswfz deletion completed in 22.088501033s
• [SLOW TEST:28.270 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 09:14:36.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 1 09:14:36.702: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 1 09:14:36.723: INFO: Waiting for terminating namespaces to be deleted...
Jul 1 09:14:36.726: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Jul 1 09:14:36.732: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
Jul 1 09:14:36.732: INFO: Container kube-proxy ready: true, restart count 0
Jul 1 09:14:36.732: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Jul 1 09:14:36.732: INFO: Container kindnet-cni ready: true, restart count 0
Jul 1 09:14:36.732: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Jul 1 09:14:36.732: INFO: Container coredns ready: true, restart count 0
Jul 1 09:14:36.732: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Jul 1 09:14:36.740: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Jul 1 09:14:36.740: INFO: Container kube-proxy ready: true, restart count 0
Jul 1 09:14:36.740: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Jul 1 09:14:36.740: INFO: Container kindnet-cni ready: true, restart count 0
Jul 1 09:14:36.740: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Jul 1 09:14:36.740: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.161d9618e1090650], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 09:14:37.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-kvwjp" for this suite.
Jul 1 09:14:43.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 1 09:14:43.795: INFO: namespace: e2e-tests-sched-pred-kvwjp, resource: bindings, ignored listing per whitelist
Jul 1 09:14:43.851: INFO: namespace e2e-tests-sched-pred-kvwjp deletion completed in 6.086751707s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:7.282 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 09:14:43.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jul 1 09:14:44.515: INFO: created pod
pod-service-account-defaultsa Jul 1 09:14:44.515: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 1 09:14:44.529: INFO: created pod pod-service-account-mountsa Jul 1 09:14:44.529: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 1 09:14:44.536: INFO: created pod pod-service-account-nomountsa Jul 1 09:14:44.536: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 1 09:14:44.608: INFO: created pod pod-service-account-defaultsa-mountspec Jul 1 09:14:44.608: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 1 09:14:44.630: INFO: created pod pod-service-account-mountsa-mountspec Jul 1 09:14:44.630: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 1 09:14:44.674: INFO: created pod pod-service-account-nomountsa-mountspec Jul 1 09:14:44.674: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 1 09:14:44.703: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 1 09:14:44.703: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 1 09:14:44.746: INFO: created pod pod-service-account-mountsa-nomountspec Jul 1 09:14:44.746: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 1 09:14:44.775: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 1 09:14:44.775: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:14:44.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-r5fn8" for this suite. 
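The nine pods above cover every combination of the service account's automountServiceAccountToken setting (default/true/false) with the pod spec's own setting (unset/true/false); when both are set, the pod-level field wins. A minimal sketch of the double opt-out case (the service account definition is assumed, not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-nomountspec  # name from the log above
spec:
  serviceAccountName: nomount-sa        # assumed SA created with automountServiceAccountToken: false
  automountServiceAccountToken: false   # pod-level setting; overrides the service account's value
  containers:
  - name: token-test
    image: k8s.gcr.io/pause:3.1
```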
Jul 1 09:15:18.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:15:19.039: INFO: namespace: e2e-tests-svcaccounts-r5fn8, resource: bindings, ignored listing per whitelist Jul 1 09:15:19.064: INFO: namespace e2e-tests-svcaccounts-r5fn8 deletion completed in 34.150364516s • [SLOW TEST:35.213 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:15:19.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-623eeaaf-bb7b-11ea-a133-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-623eeaaf-bb7b-11ea-a133-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:15:25.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fjvgt" for this suite. 
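The "updates should be reflected in volume" flow above relies on the kubelet periodically re-syncing configMap-backed volumes, so a change to the ConfigMap eventually appears inside a running pod without a restart. A sketch of the pod shape (the key name, mount path, and command are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
spec:
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-623eeaaf-bb7b-11ea-a133-0242ac110018  # name from the log
  containers:
  - name: configmap-volume-test
    image: busybox
    # assumed watcher loop: prints the mounted key until the updated value shows up
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 1; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
```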
Jul 1 09:15:47.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:15:47.695: INFO: namespace: e2e-tests-configmap-fjvgt, resource: bindings, ignored listing per whitelist Jul 1 09:15:47.786: INFO: namespace e2e-tests-configmap-fjvgt deletion completed in 22.172353596s • [SLOW TEST:28.722 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:15:47.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-7356b17c-bb7b-11ea-a133-0242ac110018 STEP: Creating a pod to test consume configMaps Jul 1 09:15:47.916: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-735c4dde-bb7b-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-fz6cm" to be "success or failure" Jul 1 09:15:47.932: INFO: Pod "pod-projected-configmaps-735c4dde-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", 
readiness=false. Elapsed: 16.329215ms Jul 1 09:15:50.064: INFO: Pod "pod-projected-configmaps-735c4dde-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147838693s Jul 1 09:15:52.068: INFO: Pod "pod-projected-configmaps-735c4dde-bb7b-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1524547s STEP: Saw pod success Jul 1 09:15:52.068: INFO: Pod "pod-projected-configmaps-735c4dde-bb7b-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:15:52.071: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-735c4dde-bb7b-11ea-a133-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jul 1 09:15:52.101: INFO: Waiting for pod pod-projected-configmaps-735c4dde-bb7b-11ea-a133-0242ac110018 to disappear Jul 1 09:15:52.116: INFO: Pod pod-projected-configmaps-735c4dde-bb7b-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:15:52.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fz6cm" for this suite. 
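The "mappings as non-root" variant above projects selected ConfigMap keys to custom paths and runs the container under a non-root UID; a rough sketch (UID, key, path, and mount point are illustrative):

```yaml
apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsUser: 1000                   # assumed non-root UID
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-7356b17c-bb7b-11ea-a133-0242ac110018  # from the log
          items:
          - key: data-1               # assumed key
            path: path/to/data-2      # assumed mapped path inside the volume
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
```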
Jul 1 09:15:58.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:15:58.212: INFO: namespace: e2e-tests-projected-fz6cm, resource: bindings, ignored listing per whitelist Jul 1 09:15:58.237: INFO: namespace e2e-tests-projected-fz6cm deletion completed in 6.117105915s • [SLOW TEST:10.451 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:15:58.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-799cdb0e-bb7b-11ea-a133-0242ac110018 STEP: Creating a pod to test consume configMaps Jul 1 09:15:58.424: INFO: Waiting up to 5m0s for pod "pod-configmaps-79a0b464-bb7b-11ea-a133-0242ac110018" in namespace "e2e-tests-configmap-gprc8" to be "success or failure" Jul 1 09:15:58.525: INFO: Pod "pod-configmaps-79a0b464-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 100.870111ms Jul 1 09:16:00.591: INFO: Pod "pod-configmaps-79a0b464-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166788515s Jul 1 09:16:03.478: INFO: Pod "pod-configmaps-79a0b464-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.05334696s Jul 1 09:16:05.482: INFO: Pod "pod-configmaps-79a0b464-bb7b-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.058047989s STEP: Saw pod success Jul 1 09:16:05.482: INFO: Pod "pod-configmaps-79a0b464-bb7b-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:16:05.485: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-79a0b464-bb7b-11ea-a133-0242ac110018 container configmap-volume-test: STEP: delete the pod Jul 1 09:16:05.551: INFO: Waiting for pod pod-configmaps-79a0b464-bb7b-11ea-a133-0242ac110018 to disappear Jul 1 09:16:05.603: INFO: Pod pod-configmaps-79a0b464-bb7b-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:16:05.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gprc8" for this suite. 
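The "multiple volumes in the same pod" case mounts one ConfigMap through two separate volumes; sketched below (key name and mount paths are assumed):

```yaml
apiVersion: v1
kind: Pod
spec:
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-799cdb0e-bb7b-11ea-a133-0242ac110018  # from the log
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-799cdb0e-bb7b-11ea-a133-0242ac110018  # same ConfigMap, second volume
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]  # assumed key
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
```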
Jul 1 09:16:11.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:16:11.689: INFO: namespace: e2e-tests-configmap-gprc8, resource: bindings, ignored listing per whitelist Jul 1 09:16:11.726: INFO: namespace e2e-tests-configmap-gprc8 deletion completed in 6.118687792s • [SLOW TEST:13.488 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:16:11.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-81b6d492-bb7b-11ea-a133-0242ac110018 STEP: Creating a pod to test consume secrets Jul 1 09:16:12.062: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-81b93f80-bb7b-11ea-a133-0242ac110018" in namespace "e2e-tests-projected-7m9mg" to be "success or failure" Jul 1 09:16:12.071: INFO: Pod "pod-projected-secrets-81b93f80-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.655277ms Jul 1 09:16:14.107: INFO: Pod "pod-projected-secrets-81b93f80-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045811083s Jul 1 09:16:16.232: INFO: Pod "pod-projected-secrets-81b93f80-bb7b-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.170763101s Jul 1 09:16:18.237: INFO: Pod "pod-projected-secrets-81b93f80-bb7b-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.175617759s STEP: Saw pod success Jul 1 09:16:18.237: INFO: Pod "pod-projected-secrets-81b93f80-bb7b-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:16:18.240: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-81b93f80-bb7b-11ea-a133-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jul 1 09:16:18.293: INFO: Waiting for pod pod-projected-secrets-81b93f80-bb7b-11ea-a133-0242ac110018 to disappear Jul 1 09:16:18.302: INFO: Pod pod-projected-secrets-81b93f80-bb7b-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:16:18.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7m9mg" for this suite. 
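The projected-secret case above exposes a Secret through a projected volume rather than a plain secret volume; a sketch (key name and mount path are assumed):

```yaml
apiVersion: v1
kind: Pod
spec:
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-81b6d492-bb7b-11ea-a133-0242ac110018  # from the log
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]  # assumed key
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
```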
Jul 1 09:16:24.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:16:24.332: INFO: namespace: e2e-tests-projected-7m9mg, resource: bindings, ignored listing per whitelist Jul 1 09:16:24.394: INFO: namespace e2e-tests-projected-7m9mg deletion completed in 6.087491306s • [SLOW TEST:12.668 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:16:24.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 09:16:24.580: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 5.869703ms) Jul 1 09:16:24.583: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.551022ms) Jul 1 09:16:24.587: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.041079ms) Jul 1 09:16:24.591: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.806321ms) Jul 1 09:16:24.594: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.351625ms) Jul 1 09:16:24.598: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.094376ms) Jul 1 09:16:24.600: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.700905ms) Jul 1 09:16:24.604: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.180558ms) Jul 1 09:16:24.606: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.778222ms) Jul 1 09:16:24.609: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.937465ms) Jul 1 09:16:24.612: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.687241ms) Jul 1 09:16:24.615: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.195881ms) Jul 1 09:16:24.619: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.320359ms) Jul 1 09:16:24.622: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.456529ms) Jul 1 09:16:24.626: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.434509ms) Jul 1 09:16:24.628: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.714609ms) Jul 1 09:16:24.632: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.227033ms) Jul 1 09:16:24.635: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.414443ms) Jul 1 09:16:24.639: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.512357ms) Jul 1 09:16:24.642: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.797518ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:16:24.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-c5zg7" for this suite. Jul 1 09:16:30.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:16:30.755: INFO: namespace: e2e-tests-proxy-c5zg7, resource: bindings, ignored listing per whitelist Jul 1 09:16:30.757: INFO: namespace e2e-tests-proxy-c5zg7 deletion completed in 6.110912789s • [SLOW TEST:6.363 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:16:30.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod 
pod-subpath-test-configmap-gxj6 STEP: Creating a pod to test atomic-volume-subpath Jul 1 09:16:30.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gxj6" in namespace "e2e-tests-subpath-sg7qt" to be "success or failure" Jul 1 09:16:30.926: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.998872ms Jul 1 09:16:32.931: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03733969s Jul 1 09:16:34.963: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069573641s Jul 1 09:16:36.979: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08589972s Jul 1 09:16:38.983: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Running", Reason="", readiness=false. Elapsed: 8.09025284s Jul 1 09:16:40.988: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Running", Reason="", readiness=false. Elapsed: 10.094722216s Jul 1 09:16:42.992: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Running", Reason="", readiness=false. Elapsed: 12.09909949s Jul 1 09:16:44.997: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Running", Reason="", readiness=false. Elapsed: 14.103777727s Jul 1 09:16:47.002: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Running", Reason="", readiness=false. Elapsed: 16.108477507s Jul 1 09:16:49.006: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Running", Reason="", readiness=false. Elapsed: 18.113286013s Jul 1 09:16:51.011: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Running", Reason="", readiness=false. Elapsed: 20.117581798s Jul 1 09:16:53.015: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Running", Reason="", readiness=false. Elapsed: 22.121752736s Jul 1 09:16:55.019: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.125501555s Jul 1 09:16:57.113: INFO: Pod "pod-subpath-test-configmap-gxj6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.220279962s STEP: Saw pod success Jul 1 09:16:57.114: INFO: Pod "pod-subpath-test-configmap-gxj6" satisfied condition "success or failure" Jul 1 09:16:57.117: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-gxj6 container test-container-subpath-configmap-gxj6: STEP: delete the pod Jul 1 09:16:57.295: INFO: Waiting for pod pod-subpath-test-configmap-gxj6 to disappear Jul 1 09:16:57.334: INFO: Pod pod-subpath-test-configmap-gxj6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-gxj6 Jul 1 09:16:57.334: INFO: Deleting pod "pod-subpath-test-configmap-gxj6" in namespace "e2e-tests-subpath-sg7qt" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:16:57.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-sg7qt" for this suite. 
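The atomic-writer subpath test above mounts a single ConfigMap entry via subPath, so the container sees one file rather than the whole volume; roughly (ConfigMap name and key are assumed, the pod/container names come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-gxj6       # name from the log
spec:
  volumes:
  - name: test-volume
    configMap:
      name: my-configmap                      # assumed ConfigMap name
  containers:
  - name: test-container-subpath-configmap-gxj6
    image: busybox
    command: ["cat", "/test-volume/configmap-key"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/configmap-key   # path of the single mounted file
      subPath: configmap-key                  # assumed key; mounts just this entry
```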
Jul 1 09:17:03.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:17:03.575: INFO: namespace: e2e-tests-subpath-sg7qt, resource: bindings, ignored listing per whitelist Jul 1 09:17:03.632: INFO: namespace e2e-tests-subpath-sg7qt deletion completed in 6.29060419s • [SLOW TEST:32.874 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:17:03.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jul 1 09:17:03.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:04.196: INFO: 
stderr: "" Jul 1 09:17:04.196: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 09:17:04.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:04.350: INFO: stderr: "" Jul 1 09:17:04.350: INFO: stdout: "update-demo-nautilus-l577n update-demo-nautilus-pbbt4 " Jul 1 09:17:04.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l577n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:04.504: INFO: stderr: "" Jul 1 09:17:04.504: INFO: stdout: "" Jul 1 09:17:04.504: INFO: update-demo-nautilus-l577n is created but not running Jul 1 09:17:09.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:09.709: INFO: stderr: "" Jul 1 09:17:09.709: INFO: stdout: "update-demo-nautilus-l577n update-demo-nautilus-pbbt4 " Jul 1 09:17:09.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l577n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:09.820: INFO: stderr: "" Jul 1 09:17:09.820: INFO: stdout: "" Jul 1 09:17:09.820: INFO: update-demo-nautilus-l577n is created but not running Jul 1 09:17:14.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:14.925: INFO: stderr: "" Jul 1 09:17:14.925: INFO: stdout: "update-demo-nautilus-l577n update-demo-nautilus-pbbt4 " Jul 1 09:17:14.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l577n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:15.020: INFO: stderr: "" Jul 1 09:17:15.020: INFO: stdout: "true" Jul 1 09:17:15.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l577n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:15.114: INFO: stderr: "" Jul 1 09:17:15.114: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 09:17:15.114: INFO: validating pod update-demo-nautilus-l577n Jul 1 09:17:15.118: INFO: got data: { "image": "nautilus.jpg" } Jul 1 09:17:15.118: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 09:17:15.118: INFO: update-demo-nautilus-l577n is verified up and running Jul 1 09:17:15.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbbt4 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:15.543: INFO: stderr: "" Jul 1 09:17:15.543: INFO: stdout: "true" Jul 1 09:17:15.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbbt4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:15.644: INFO: stderr: "" Jul 1 09:17:15.645: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 09:17:15.645: INFO: validating pod update-demo-nautilus-pbbt4 Jul 1 09:17:15.649: INFO: got data: { "image": "nautilus.jpg" } Jul 1 09:17:15.649: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 09:17:15.649: INFO: update-demo-nautilus-pbbt4 is verified up and running STEP: scaling down the replication controller Jul 1 09:17:15.651: INFO: scanned /root for discovery docs: Jul 1 09:17:15.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:16.891: INFO: stderr: "" Jul 1 09:17:16.891: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 1 09:17:16.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:16.997: INFO: stderr: "" Jul 1 09:17:16.998: INFO: stdout: "update-demo-nautilus-l577n update-demo-nautilus-pbbt4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 1 09:17:21.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:22.113: INFO: stderr: "" Jul 1 09:17:22.113: INFO: stdout: "update-demo-nautilus-pbbt4 " Jul 1 09:17:22.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbbt4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:22.216: INFO: stderr: "" Jul 1 09:17:22.216: INFO: stdout: "true" Jul 1 09:17:22.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbbt4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:22.310: INFO: stderr: "" Jul 1 09:17:22.310: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 09:17:22.310: INFO: validating pod update-demo-nautilus-pbbt4 Jul 1 09:17:22.313: INFO: got data: { "image": "nautilus.jpg" } Jul 1 09:17:22.313: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 1 09:17:22.313: INFO: update-demo-nautilus-pbbt4 is verified up and running STEP: scaling up the replication controller Jul 1 09:17:22.316: INFO: scanned /root for discovery docs: Jul 1 09:17:22.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:23.492: INFO: stderr: "" Jul 1 09:17:23.492: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 09:17:23.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:23.591: INFO: stderr: "" Jul 1 09:17:23.591: INFO: stdout: "update-demo-nautilus-gkq4b update-demo-nautilus-pbbt4 " Jul 1 09:17:23.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gkq4b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:23.687: INFO: stderr: "" Jul 1 09:17:23.687: INFO: stdout: "" Jul 1 09:17:23.687: INFO: update-demo-nautilus-gkq4b is created but not running Jul 1 09:17:28.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:28.881: INFO: stderr: "" Jul 1 09:17:28.881: INFO: stdout: "update-demo-nautilus-gkq4b update-demo-nautilus-pbbt4 " Jul 1 09:17:28.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gkq4b -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:28.973: INFO: stderr: "" Jul 1 09:17:28.973: INFO: stdout: "true" Jul 1 09:17:28.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gkq4b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:29.070: INFO: stderr: "" Jul 1 09:17:29.070: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 09:17:29.070: INFO: validating pod update-demo-nautilus-gkq4b Jul 1 09:17:29.073: INFO: got data: { "image": "nautilus.jpg" } Jul 1 09:17:29.073: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 09:17:29.073: INFO: update-demo-nautilus-gkq4b is verified up and running Jul 1 09:17:29.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbbt4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:29.214: INFO: stderr: "" Jul 1 09:17:29.214: INFO: stdout: "true" Jul 1 09:17:29.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbbt4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:29.313: INFO: stderr: "" Jul 1 09:17:29.313: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 09:17:29.313: INFO: validating pod update-demo-nautilus-pbbt4 Jul 1 09:17:29.316: INFO: got data: { "image": "nautilus.jpg" } Jul 1 09:17:29.316: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 09:17:29.316: INFO: update-demo-nautilus-pbbt4 is verified up and running STEP: using delete to clean up resources Jul 1 09:17:29.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:29.453: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 09:17:29.453: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 1 09:17:29.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-jwwms' Jul 1 09:17:29.647: INFO: stderr: "No resources found.\n" Jul 1 09:17:29.648: INFO: stdout: "" Jul 1 09:17:29.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-jwwms -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 09:17:29.763: INFO: stderr: "" Jul 1 09:17:29.763: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:17:29.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jwwms" for this suite. 
Jul 1 09:17:53.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:17:53.835: INFO: namespace: e2e-tests-kubectl-jwwms, resource: bindings, ignored listing per whitelist Jul 1 09:17:53.879: INFO: namespace e2e-tests-kubectl-jwwms deletion completed in 24.112724545s • [SLOW TEST:50.247 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:17:53.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jul 1 09:17:53.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-q5kmm run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 
--restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jul 1 09:17:57.168: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0701 09:17:57.081988 3636 log.go:172] (0xc0001386e0) (0xc0005f4140) Create stream\nI0701 09:17:57.082051 3636 log.go:172] (0xc0001386e0) (0xc0005f4140) Stream added, broadcasting: 1\nI0701 09:17:57.084514 3636 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0701 09:17:57.084590 3636 log.go:172] (0xc0001386e0) (0xc0008f2000) Create stream\nI0701 09:17:57.084603 3636 log.go:172] (0xc0001386e0) (0xc0008f2000) Stream added, broadcasting: 3\nI0701 09:17:57.086028 3636 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0701 09:17:57.086071 3636 log.go:172] (0xc0001386e0) (0xc0005f41e0) Create stream\nI0701 09:17:57.086100 3636 log.go:172] (0xc0001386e0) (0xc0005f41e0) Stream added, broadcasting: 5\nI0701 09:17:57.087239 3636 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0701 09:17:57.087274 3636 log.go:172] (0xc0001386e0) (0xc0005f4280) Create stream\nI0701 09:17:57.087285 3636 log.go:172] (0xc0001386e0) (0xc0005f4280) Stream added, broadcasting: 7\nI0701 09:17:57.088323 3636 log.go:172] (0xc0001386e0) Reply frame received for 7\nI0701 09:17:57.088527 3636 log.go:172] (0xc0008f2000) (3) Writing data frame\nI0701 09:17:57.088655 3636 log.go:172] (0xc0008f2000) (3) Writing data frame\nI0701 09:17:57.089708 3636 log.go:172] (0xc0001386e0) Data frame received for 5\nI0701 09:17:57.089729 3636 log.go:172] (0xc0005f41e0) (5) Data frame handling\nI0701 09:17:57.089743 3636 log.go:172] (0xc0005f41e0) (5) Data frame sent\nI0701 09:17:57.090168 3636 log.go:172] (0xc0001386e0) Data frame received for 5\nI0701 09:17:57.090183 3636 log.go:172] (0xc0005f41e0) (5) Data frame handling\nI0701 09:17:57.090196 3636 log.go:172] (0xc0005f41e0) (5) 
Data frame sent\nI0701 09:17:57.144202 3636 log.go:172] (0xc0001386e0) Data frame received for 5\nI0701 09:17:57.144239 3636 log.go:172] (0xc0005f41e0) (5) Data frame handling\nI0701 09:17:57.144256 3636 log.go:172] (0xc0001386e0) Data frame received for 7\nI0701 09:17:57.144260 3636 log.go:172] (0xc0005f4280) (7) Data frame handling\nI0701 09:17:57.144383 3636 log.go:172] (0xc0001386e0) Data frame received for 1\nI0701 09:17:57.144397 3636 log.go:172] (0xc0005f4140) (1) Data frame handling\nI0701 09:17:57.144413 3636 log.go:172] (0xc0005f4140) (1) Data frame sent\nI0701 09:17:57.144424 3636 log.go:172] (0xc0001386e0) (0xc0005f4140) Stream removed, broadcasting: 1\nI0701 09:17:57.144496 3636 log.go:172] (0xc0001386e0) (0xc0005f4140) Stream removed, broadcasting: 1\nI0701 09:17:57.144513 3636 log.go:172] (0xc0001386e0) (0xc0008f2000) Stream removed, broadcasting: 3\nI0701 09:17:57.144521 3636 log.go:172] (0xc0001386e0) (0xc0005f41e0) Stream removed, broadcasting: 5\nI0701 09:17:57.144677 3636 log.go:172] (0xc0001386e0) (0xc0005f4280) Stream removed, broadcasting: 7\nI0701 09:17:57.144824 3636 log.go:172] (0xc0001386e0) Go away received\n" Jul 1 09:17:57.168: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:17:59.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q5kmm" for this suite. 
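The `run --rm` job above wraps `sh -c 'cat && echo stdin closed'`: the attached stdin (`abcd1234`, sent by the test) is echoed by `cat` with no trailing newline, then `echo` appends the marker, producing the `abcd1234stdin closed` stdout seen in the log. The same shell behavior can be reproduced locally without a cluster:

```python
import subprocess

# Reproduce the job's command locally (no kubectl involved): pipe "abcd1234"
# into `cat && echo 'stdin closed'`, exactly as the attached stdin does above.
result = subprocess.run(
    ["sh", "-c", "cat && echo 'stdin closed'"],
    input="abcd1234", capture_output=True, text=True,
)
out = result.stdout  # "abcd1234stdin closed\n"
```

The remainder of the job's stdout in the log (`job.batch "e2e-test-rm-busybox-job" deleted`) comes from the `--rm=true` cleanup, not from the container itself.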
Jul 1 09:18:05.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:18:05.329: INFO: namespace: e2e-tests-kubectl-q5kmm, resource: bindings, ignored listing per whitelist Jul 1 09:18:05.329: INFO: namespace e2e-tests-kubectl-q5kmm deletion completed in 6.149597547s • [SLOW TEST:11.450 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:18:05.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:18:05.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-58jr5" for this suite. Jul 1 09:18:11.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:18:11.602: INFO: namespace: e2e-tests-kubelet-test-58jr5, resource: bindings, ignored listing per whitelist Jul 1 09:18:11.651: INFO: namespace e2e-tests-kubelet-test-58jr5 deletion completed in 6.087044958s • [SLOW TEST:6.321 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:18:11.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 1 09:18:11.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qtvh7' Jul 1 09:18:11.922: INFO: stderr: "" Jul 1 09:18:11.922: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jul 1 09:18:21.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qtvh7 -o json' Jul 1 09:18:22.064: INFO: stderr: "" Jul 1 09:18:22.064: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-01T09:18:11Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-qtvh7\",\n \"resourceVersion\": \"18840770\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-qtvh7/pods/e2e-test-nginx-pod\",\n \"uid\": \"c9317a29-bb7b-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-nc65d\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n 
\"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-nc65d\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-nc65d\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T09:18:11Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T09:18:17Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T09:18:17Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T09:18:11Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://494949f0059bd6a3c4a40d30fa7d8b4a0358f32d21faba3c9dabe54e95df4035\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-01T09:18:15Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.66\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-01T09:18:11Z\"\n }\n}\n" STEP: replace the image in the pod Jul 1 09:18:22.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - 
--namespace=e2e-tests-kubectl-qtvh7' Jul 1 09:18:22.342: INFO: stderr: "" Jul 1 09:18:22.342: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jul 1 09:18:22.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qtvh7' Jul 1 09:18:31.412: INFO: stderr: "" Jul 1 09:18:31.412: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:18:31.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qtvh7" for this suite. Jul 1 09:18:37.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:18:37.545: INFO: namespace: e2e-tests-kubectl-qtvh7, resource: bindings, ignored listing per whitelist Jul 1 09:18:37.575: INFO: namespace e2e-tests-kubectl-qtvh7 deletion completed in 6.141756133s • [SLOW TEST:25.924 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker 
Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:18:37.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Jul 1 09:18:37.739: INFO: Waiting up to 5m0s for pod "client-containers-d89496c0-bb7b-11ea-a133-0242ac110018" in namespace "e2e-tests-containers-w967w" to be "success or failure" Jul 1 09:18:37.743: INFO: Pod "client-containers-d89496c0-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.48042ms Jul 1 09:18:39.748: INFO: Pod "client-containers-d89496c0-bb7b-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008001079s Jul 1 09:18:41.751: INFO: Pod "client-containers-d89496c0-bb7b-11ea-a133-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.011741855s Jul 1 09:18:43.755: INFO: Pod "client-containers-d89496c0-bb7b-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015491984s STEP: Saw pod success Jul 1 09:18:43.755: INFO: Pod "client-containers-d89496c0-bb7b-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:18:43.758: INFO: Trying to get logs from node hunter-worker2 pod client-containers-d89496c0-bb7b-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 09:18:43.790: INFO: Waiting for pod client-containers-d89496c0-bb7b-11ea-a133-0242ac110018 to disappear Jul 1 09:18:43.806: INFO: Pod client-containers-d89496c0-bb7b-11ea-a133-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:18:43.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-w967w" for this suite. Jul 1 09:18:49.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:18:49.875: INFO: namespace: e2e-tests-containers-w967w, resource: bindings, ignored listing per whitelist Jul 1 09:18:49.927: INFO: namespace e2e-tests-containers-w967w deletion completed in 6.117569615s • [SLOW TEST:12.352 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jul 1 09:18:49.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 1 09:18:50.498: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-a,UID:dffd30f7-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840887,Generation:0,CreationTimestamp:2020-07-01 09:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 09:18:50.499: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-a,UID:dffd30f7-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840887,Generation:0,CreationTimestamp:2020-07-01 09:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 1 09:19:00.507: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-a,UID:dffd30f7-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840907,Generation:0,CreationTimestamp:2020-07-01 09:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 1 09:19:00.507: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-a,UID:dffd30f7-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840907,Generation:0,CreationTimestamp:2020-07-01 09:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 1 09:19:10.518: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-a,UID:dffd30f7-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840927,Generation:0,CreationTimestamp:2020-07-01 09:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 09:19:10.518: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-a,UID:dffd30f7-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840927,Generation:0,CreationTimestamp:2020-07-01 09:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 1 09:19:20.525: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-a,UID:dffd30f7-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840947,Generation:0,CreationTimestamp:2020-07-01 09:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 09:19:20.525: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-a,UID:dffd30f7-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840947,Generation:0,CreationTimestamp:2020-07-01 09:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 1 09:19:30.535: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-b,UID:f80e4ddc-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840967,Generation:0,CreationTimestamp:2020-07-01 09:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 09:19:30.535: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-b,UID:f80e4ddc-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840967,Generation:0,CreationTimestamp:2020-07-01 09:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 1 09:19:40.542: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-b,UID:f80e4ddc-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840987,Generation:0,CreationTimestamp:2020-07-01 09:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 09:19:40.542: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-jjcc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjcc4/configmaps/e2e-watch-test-configmap-b,UID:f80e4ddc-bb7b-11ea-99e8-0242ac110002,ResourceVersion:18840987,Generation:0,CreationTimestamp:2020-07-01 09:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:19:50.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-jjcc4" for this suite. Jul 1 09:19:56.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:19:56.573: INFO: namespace: e2e-tests-watch-jjcc4, resource: bindings, ignored listing per whitelist Jul 1 09:19:56.641: INFO: namespace e2e-tests-watch-jjcc4 deletion completed in 6.092304078s • [SLOW TEST:66.713 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:19:56.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name 
secret-test-07ad373f-bb7c-11ea-a133-0242ac110018 STEP: Creating a pod to test consume secrets Jul 1 09:19:56.770: INFO: Waiting up to 5m0s for pod "pod-secrets-07b161a0-bb7c-11ea-a133-0242ac110018" in namespace "e2e-tests-secrets-xcqhq" to be "success or failure" Jul 1 09:19:56.795: INFO: Pod "pod-secrets-07b161a0-bb7c-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 24.984182ms Jul 1 09:19:58.888: INFO: Pod "pod-secrets-07b161a0-bb7c-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118300496s Jul 1 09:20:00.893: INFO: Pod "pod-secrets-07b161a0-bb7c-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123005309s STEP: Saw pod success Jul 1 09:20:00.893: INFO: Pod "pod-secrets-07b161a0-bb7c-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:20:00.896: INFO: Trying to get logs from node hunter-worker pod pod-secrets-07b161a0-bb7c-11ea-a133-0242ac110018 container secret-volume-test: STEP: delete the pod Jul 1 09:20:00.979: INFO: Waiting for pod pod-secrets-07b161a0-bb7c-11ea-a133-0242ac110018 to disappear Jul 1 09:20:01.029: INFO: Pod pod-secrets-07b161a0-bb7c-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:20:01.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xcqhq" for this suite. 
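The secret-volume consumption exercised by the test above can be reproduced by hand. This is a minimal sketch, not the framework's actual pod spec: the secret and pod names below are illustrative (the run used generated names like pod-secrets-07b161a0-…), and it assumes a reachable cluster with kubectl on PATH.

```shell
# Create a secret, then a pod that mounts it as a volume and cats it back.
kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
EOF

# Wait for the pod to complete, then its log should show the secret value,
# mirroring the "success or failure" condition the framework polls for.
until [ "$(kubectl get pod pod-secrets-demo -o jsonpath='{.status.phase}')" = "Succeeded" ]; do
  sleep 2
done
kubectl logs pod-secrets-demo
```

The framework's poll loop in the log (Pending, Pending, Succeeded over ~4s) is the same phase check as the `until` loop here, just with a 5m timeout.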
Jul 1 09:20:07.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:20:07.219: INFO: namespace: e2e-tests-secrets-xcqhq, resource: bindings, ignored listing per whitelist Jul 1 09:20:07.246: INFO: namespace e2e-tests-secrets-xcqhq deletion completed in 6.212445423s • [SLOW TEST:10.605 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:20:07.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 1 09:20:07.379: INFO: Waiting up to 5m0s for pod "pod-0e02d7ba-bb7c-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-mgxlg" to be "success or failure" Jul 1 09:20:07.383: INFO: Pod "pod-0e02d7ba-bb7c-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384896ms Jul 1 09:20:09.386: INFO: Pod "pod-0e02d7ba-bb7c-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006988517s Jul 1 09:20:11.389: INFO: Pod "pod-0e02d7ba-bb7c-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010350047s STEP: Saw pod success Jul 1 09:20:11.389: INFO: Pod "pod-0e02d7ba-bb7c-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:20:11.391: INFO: Trying to get logs from node hunter-worker2 pod pod-0e02d7ba-bb7c-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 09:20:11.434: INFO: Waiting for pod pod-0e02d7ba-bb7c-11ea-a133-0242ac110018 to disappear Jul 1 09:20:11.444: INFO: Pod pod-0e02d7ba-bb7c-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:20:11.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mgxlg" for this suite. Jul 1 09:20:19.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:20:19.605: INFO: namespace: e2e-tests-emptydir-mgxlg, resource: bindings, ignored listing per whitelist Jul 1 09:20:19.629: INFO: namespace e2e-tests-emptydir-mgxlg deletion completed in 8.181982239s • [SLOW TEST:12.383 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 
09:20:19.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jul 1 09:20:19.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q85wc' Jul 1 09:20:20.015: INFO: stderr: "" Jul 1 09:20:20.015: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Jul 1 09:20:21.019: INFO: Selector matched 1 pods for map[app:redis] Jul 1 09:20:21.019: INFO: Found 0 / 1 Jul 1 09:20:22.020: INFO: Selector matched 1 pods for map[app:redis] Jul 1 09:20:22.020: INFO: Found 0 / 1 Jul 1 09:20:23.020: INFO: Selector matched 1 pods for map[app:redis] Jul 1 09:20:23.020: INFO: Found 1 / 1 Jul 1 09:20:23.020: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 1 09:20:23.024: INFO: Selector matched 1 pods for map[app:redis] Jul 1 09:20:23.024: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jul 1 09:20:23.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-cws7h redis-master --namespace=e2e-tests-kubectl-q85wc' Jul 1 09:20:23.147: INFO: stderr: "" Jul 1 09:20:23.147: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jul 09:20:22.563 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jul 09:20:22.563 # Server started, Redis version 3.2.12\n1:M 01 Jul 09:20:22.563 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jul 09:20:22.563 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jul 1 09:20:23.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cws7h redis-master --namespace=e2e-tests-kubectl-q85wc --tail=1' Jul 1 09:20:23.254: INFO: stderr: "" Jul 1 09:20:23.254: INFO: stdout: "1:M 01 Jul 09:20:22.563 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jul 1 09:20:23.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cws7h redis-master --namespace=e2e-tests-kubectl-q85wc --limit-bytes=1' Jul 1 09:20:23.401: INFO: stderr: "" Jul 1 09:20:23.401: INFO: stdout: " " STEP: exposing timestamps Jul 1 09:20:23.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cws7h redis-master --namespace=e2e-tests-kubectl-q85wc --tail=1 --timestamps' Jul 1 09:20:23.501: INFO: stderr: "" 
Jul 1 09:20:23.501: INFO: stdout: "2020-07-01T09:20:22.563538243Z 1:M 01 Jul 09:20:22.563 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jul 1 09:20:26.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cws7h redis-master --namespace=e2e-tests-kubectl-q85wc --since=1s' Jul 1 09:20:26.132: INFO: stderr: "" Jul 1 09:20:26.132: INFO: stdout: "" Jul 1 09:20:26.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cws7h redis-master --namespace=e2e-tests-kubectl-q85wc --since=24h' Jul 1 09:20:26.248: INFO: stderr: "" Jul 1 09:20:26.248: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jul 09:20:22.563 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jul 09:20:22.563 # Server started, Redis version 3.2.12\n1:M 01 Jul 09:20:22.563 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 01 Jul 09:20:22.563 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jul 1 09:20:26.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-q85wc' Jul 1 09:20:26.369: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 09:20:26.369: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jul 1 09:20:26.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-q85wc' Jul 1 09:20:26.485: INFO: stderr: "No resources found.\n" Jul 1 09:20:26.485: INFO: stdout: "" Jul 1 09:20:26.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-q85wc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 09:20:26.582: INFO: stderr: "" Jul 1 09:20:26.582: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:20:26.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q85wc" for this suite. 
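The log-filtering steps exercised above map to the following kubectl flags (the run invoked the now-deprecated `log` alias; `logs` is equivalent). Pod and namespace names are the generated ones from this run, so treat this as an illustration of the flags rather than commands to rerun verbatim:

```shell
NS=e2e-tests-kubectl-q85wc
POD=redis-master-cws7h

kubectl logs "$POD" redis-master -n "$NS"                        # full container log
kubectl logs "$POD" redis-master -n "$NS" --tail=1               # last line only
kubectl logs "$POD" redis-master -n "$NS" --limit-bytes=1        # first byte only
kubectl logs "$POD" redis-master -n "$NS" --tail=1 --timestamps  # prefix RFC3339 timestamps
kubectl logs "$POD" redis-master -n "$NS" --since=1s             # entries from the last second
kubectl logs "$POD" redis-master -n "$NS" --since=24h            # entries from the last 24 hours
```

This matches the observed outputs: `--since=1s` returned nothing because the last Redis log line was ~3.5s old, while `--since=24h` returned the full startup banner.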
Jul 1 09:20:32.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:20:32.844: INFO: namespace: e2e-tests-kubectl-q85wc, resource: bindings, ignored listing per whitelist Jul 1 09:20:32.928: INFO: namespace e2e-tests-kubectl-q85wc deletion completed in 6.342996481s • [SLOW TEST:13.299 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:20:32.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 1 09:20:49.402: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 
09:20:49.402: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:49.430122 6 log.go:172] (0xc0009e1340) (0xc000907ea0) Create stream I0701 09:20:49.430157 6 log.go:172] (0xc0009e1340) (0xc000907ea0) Stream added, broadcasting: 1 I0701 09:20:49.431801 6 log.go:172] (0xc0009e1340) Reply frame received for 1 I0701 09:20:49.431860 6 log.go:172] (0xc0009e1340) (0xc002198280) Create stream I0701 09:20:49.431876 6 log.go:172] (0xc0009e1340) (0xc002198280) Stream added, broadcasting: 3 I0701 09:20:49.432742 6 log.go:172] (0xc0009e1340) Reply frame received for 3 I0701 09:20:49.432790 6 log.go:172] (0xc0009e1340) (0xc001e09c20) Create stream I0701 09:20:49.432810 6 log.go:172] (0xc0009e1340) (0xc001e09c20) Stream added, broadcasting: 5 I0701 09:20:49.433940 6 log.go:172] (0xc0009e1340) Reply frame received for 5 I0701 09:20:49.503818 6 log.go:172] (0xc0009e1340) Data frame received for 3 I0701 09:20:49.503844 6 log.go:172] (0xc002198280) (3) Data frame handling I0701 09:20:49.503858 6 log.go:172] (0xc002198280) (3) Data frame sent I0701 09:20:49.503868 6 log.go:172] (0xc0009e1340) Data frame received for 3 I0701 09:20:49.503875 6 log.go:172] (0xc002198280) (3) Data frame handling I0701 09:20:49.504215 6 log.go:172] (0xc0009e1340) Data frame received for 5 I0701 09:20:49.504268 6 log.go:172] (0xc001e09c20) (5) Data frame handling I0701 09:20:49.553465 6 log.go:172] (0xc0009e1340) Data frame received for 1 I0701 09:20:49.553513 6 log.go:172] (0xc000907ea0) (1) Data frame handling I0701 09:20:49.553544 6 log.go:172] (0xc000907ea0) (1) Data frame sent I0701 09:20:49.553567 6 log.go:172] (0xc0009e1340) (0xc000907ea0) Stream removed, broadcasting: 1 I0701 09:20:49.553585 6 log.go:172] (0xc0009e1340) Go away received I0701 09:20:49.553701 6 log.go:172] (0xc0009e1340) (0xc000907ea0) Stream removed, broadcasting: 1 I0701 09:20:49.553732 6 log.go:172] (0xc0009e1340) (0xc002198280) Stream removed, broadcasting: 3 I0701 09:20:49.553739 6 log.go:172] (0xc0009e1340) (0xc001e09c20) 
Stream removed, broadcasting: 5 Jul 1 09:20:49.553: INFO: Exec stderr: "" Jul 1 09:20:49.553: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:20:49.553: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:49.605939 6 log.go:172] (0xc000a0a4d0) (0xc001e09f40) Create stream I0701 09:20:49.605977 6 log.go:172] (0xc000a0a4d0) (0xc001e09f40) Stream added, broadcasting: 1 I0701 09:20:49.607495 6 log.go:172] (0xc000a0a4d0) Reply frame received for 1 I0701 09:20:49.607542 6 log.go:172] (0xc000a0a4d0) (0xc0027ee8c0) Create stream I0701 09:20:49.607554 6 log.go:172] (0xc000a0a4d0) (0xc0027ee8c0) Stream added, broadcasting: 3 I0701 09:20:49.608361 6 log.go:172] (0xc000a0a4d0) Reply frame received for 3 I0701 09:20:49.608385 6 log.go:172] (0xc000a0a4d0) (0xc002198320) Create stream I0701 09:20:49.608394 6 log.go:172] (0xc000a0a4d0) (0xc002198320) Stream added, broadcasting: 5 I0701 09:20:49.609291 6 log.go:172] (0xc000a0a4d0) Reply frame received for 5 I0701 09:20:49.670681 6 log.go:172] (0xc000a0a4d0) Data frame received for 3 I0701 09:20:49.670715 6 log.go:172] (0xc0027ee8c0) (3) Data frame handling I0701 09:20:49.670731 6 log.go:172] (0xc0027ee8c0) (3) Data frame sent I0701 09:20:49.670740 6 log.go:172] (0xc000a0a4d0) Data frame received for 3 I0701 09:20:49.670752 6 log.go:172] (0xc0027ee8c0) (3) Data frame handling I0701 09:20:49.670775 6 log.go:172] (0xc000a0a4d0) Data frame received for 5 I0701 09:20:49.670785 6 log.go:172] (0xc002198320) (5) Data frame handling I0701 09:20:49.672096 6 log.go:172] (0xc000a0a4d0) Data frame received for 1 I0701 09:20:49.672120 6 log.go:172] (0xc001e09f40) (1) Data frame handling I0701 09:20:49.672133 6 log.go:172] (0xc001e09f40) (1) Data frame sent I0701 09:20:49.672165 6 log.go:172] (0xc000a0a4d0) (0xc001e09f40) Stream removed, broadcasting: 1 I0701 
09:20:49.672249 6 log.go:172] (0xc000a0a4d0) (0xc001e09f40) Stream removed, broadcasting: 1 I0701 09:20:49.672281 6 log.go:172] (0xc000a0a4d0) (0xc0027ee8c0) Stream removed, broadcasting: 3 I0701 09:20:49.672398 6 log.go:172] (0xc000a0a4d0) Go away received I0701 09:20:49.672542 6 log.go:172] (0xc000a0a4d0) (0xc002198320) Stream removed, broadcasting: 5 Jul 1 09:20:49.672: INFO: Exec stderr: "" Jul 1 09:20:49.672: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:20:49.672: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:49.701402 6 log.go:172] (0xc000a0a9a0) (0xc001970280) Create stream I0701 09:20:49.701429 6 log.go:172] (0xc000a0a9a0) (0xc001970280) Stream added, broadcasting: 1 I0701 09:20:49.703714 6 log.go:172] (0xc000a0a9a0) Reply frame received for 1 I0701 09:20:49.703749 6 log.go:172] (0xc000a0a9a0) (0xc0027ee960) Create stream I0701 09:20:49.703760 6 log.go:172] (0xc000a0a9a0) (0xc0027ee960) Stream added, broadcasting: 3 I0701 09:20:49.704631 6 log.go:172] (0xc000a0a9a0) Reply frame received for 3 I0701 09:20:49.704678 6 log.go:172] (0xc000a0a9a0) (0xc0027eea00) Create stream I0701 09:20:49.704690 6 log.go:172] (0xc000a0a9a0) (0xc0027eea00) Stream added, broadcasting: 5 I0701 09:20:49.705640 6 log.go:172] (0xc000a0a9a0) Reply frame received for 5 I0701 09:20:49.765422 6 log.go:172] (0xc000a0a9a0) Data frame received for 5 I0701 09:20:49.765457 6 log.go:172] (0xc0027eea00) (5) Data frame handling I0701 09:20:49.765537 6 log.go:172] (0xc000a0a9a0) Data frame received for 3 I0701 09:20:49.765581 6 log.go:172] (0xc0027ee960) (3) Data frame handling I0701 09:20:49.765703 6 log.go:172] (0xc0027ee960) (3) Data frame sent I0701 09:20:49.765728 6 log.go:172] (0xc000a0a9a0) Data frame received for 3 I0701 09:20:49.765743 6 log.go:172] (0xc0027ee960) (3) Data frame handling I0701 09:20:49.766913 6 
log.go:172] (0xc000a0a9a0) Data frame received for 1 I0701 09:20:49.766937 6 log.go:172] (0xc001970280) (1) Data frame handling I0701 09:20:49.766959 6 log.go:172] (0xc001970280) (1) Data frame sent I0701 09:20:49.767040 6 log.go:172] (0xc000a0a9a0) (0xc001970280) Stream removed, broadcasting: 1 I0701 09:20:49.767070 6 log.go:172] (0xc000a0a9a0) Go away received I0701 09:20:49.767163 6 log.go:172] (0xc000a0a9a0) (0xc001970280) Stream removed, broadcasting: 1 I0701 09:20:49.767195 6 log.go:172] (0xc000a0a9a0) (0xc0027ee960) Stream removed, broadcasting: 3 I0701 09:20:49.767212 6 log.go:172] (0xc000a0a9a0) (0xc0027eea00) Stream removed, broadcasting: 5 Jul 1 09:20:49.767: INFO: Exec stderr: "" Jul 1 09:20:49.767: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:20:49.767: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:49.800390 6 log.go:172] (0xc001dc0370) (0xc0027eec80) Create stream I0701 09:20:49.800421 6 log.go:172] (0xc001dc0370) (0xc0027eec80) Stream added, broadcasting: 1 I0701 09:20:49.802607 6 log.go:172] (0xc001dc0370) Reply frame received for 1 I0701 09:20:49.802670 6 log.go:172] (0xc001dc0370) (0xc001eae0a0) Create stream I0701 09:20:49.802687 6 log.go:172] (0xc001dc0370) (0xc001eae0a0) Stream added, broadcasting: 3 I0701 09:20:49.803790 6 log.go:172] (0xc001dc0370) Reply frame received for 3 I0701 09:20:49.803843 6 log.go:172] (0xc001dc0370) (0xc001eae140) Create stream I0701 09:20:49.803860 6 log.go:172] (0xc001dc0370) (0xc001eae140) Stream added, broadcasting: 5 I0701 09:20:49.804997 6 log.go:172] (0xc001dc0370) Reply frame received for 5 I0701 09:20:49.877085 6 log.go:172] (0xc001dc0370) Data frame received for 5 I0701 09:20:49.877237 6 log.go:172] (0xc001eae140) (5) Data frame handling I0701 09:20:49.877311 6 log.go:172] (0xc001dc0370) Data frame received for 3 I0701 
09:20:49.877346 6 log.go:172] (0xc001eae0a0) (3) Data frame handling I0701 09:20:49.877365 6 log.go:172] (0xc001eae0a0) (3) Data frame sent I0701 09:20:49.877379 6 log.go:172] (0xc001dc0370) Data frame received for 3 I0701 09:20:49.877385 6 log.go:172] (0xc001eae0a0) (3) Data frame handling I0701 09:20:49.878319 6 log.go:172] (0xc001dc0370) Data frame received for 1 I0701 09:20:49.878339 6 log.go:172] (0xc0027eec80) (1) Data frame handling I0701 09:20:49.878354 6 log.go:172] (0xc0027eec80) (1) Data frame sent I0701 09:20:49.878365 6 log.go:172] (0xc001dc0370) (0xc0027eec80) Stream removed, broadcasting: 1 I0701 09:20:49.878412 6 log.go:172] (0xc001dc0370) Go away received I0701 09:20:49.878460 6 log.go:172] (0xc001dc0370) (0xc0027eec80) Stream removed, broadcasting: 1 I0701 09:20:49.878482 6 log.go:172] (0xc001dc0370) (0xc001eae0a0) Stream removed, broadcasting: 3 I0701 09:20:49.878497 6 log.go:172] (0xc001dc0370) (0xc001eae140) Stream removed, broadcasting: 5 Jul 1 09:20:49.878: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 1 09:20:49.878: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:20:49.878: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:49.912977 6 log.go:172] (0xc001dc0840) (0xc0027eee60) Create stream I0701 09:20:49.913031 6 log.go:172] (0xc001dc0840) (0xc0027eee60) Stream added, broadcasting: 1 I0701 09:20:49.915752 6 log.go:172] (0xc001dc0840) Reply frame received for 1 I0701 09:20:49.915821 6 log.go:172] (0xc001dc0840) (0xc0019703c0) Create stream I0701 09:20:49.915841 6 log.go:172] (0xc001dc0840) (0xc0019703c0) Stream added, broadcasting: 3 I0701 09:20:49.917315 6 log.go:172] (0xc001dc0840) Reply frame received for 3 I0701 09:20:49.917375 6 log.go:172] (0xc001dc0840) (0xc001eae1e0) Create stream 
I0701 09:20:49.917402 6 log.go:172] (0xc001dc0840) (0xc001eae1e0) Stream added, broadcasting: 5 I0701 09:20:49.918521 6 log.go:172] (0xc001dc0840) Reply frame received for 5 I0701 09:20:50.000262 6 log.go:172] (0xc001dc0840) Data frame received for 5 I0701 09:20:50.000284 6 log.go:172] (0xc001eae1e0) (5) Data frame handling I0701 09:20:50.000314 6 log.go:172] (0xc001dc0840) Data frame received for 3 I0701 09:20:50.000354 6 log.go:172] (0xc0019703c0) (3) Data frame handling I0701 09:20:50.000387 6 log.go:172] (0xc0019703c0) (3) Data frame sent I0701 09:20:50.000402 6 log.go:172] (0xc001dc0840) Data frame received for 3 I0701 09:20:50.000413 6 log.go:172] (0xc0019703c0) (3) Data frame handling I0701 09:20:50.001865 6 log.go:172] (0xc001dc0840) Data frame received for 1 I0701 09:20:50.001883 6 log.go:172] (0xc0027eee60) (1) Data frame handling I0701 09:20:50.001893 6 log.go:172] (0xc0027eee60) (1) Data frame sent I0701 09:20:50.001906 6 log.go:172] (0xc001dc0840) (0xc0027eee60) Stream removed, broadcasting: 1 I0701 09:20:50.001968 6 log.go:172] (0xc001dc0840) Go away received I0701 09:20:50.002002 6 log.go:172] (0xc001dc0840) (0xc0027eee60) Stream removed, broadcasting: 1 I0701 09:20:50.002020 6 log.go:172] (0xc001dc0840) (0xc0019703c0) Stream removed, broadcasting: 3 I0701 09:20:50.002031 6 log.go:172] (0xc001dc0840) (0xc001eae1e0) Stream removed, broadcasting: 5 Jul 1 09:20:50.002: INFO: Exec stderr: "" Jul 1 09:20:50.002: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:20:50.002: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:50.024717 6 log.go:172] (0xc0001da2c0) (0xc001be9900) Create stream I0701 09:20:50.024751 6 log.go:172] (0xc0001da2c0) (0xc001be9900) Stream added, broadcasting: 1 I0701 09:20:50.026995 6 log.go:172] (0xc0001da2c0) Reply frame received for 1 I0701 09:20:50.027044 
6 log.go:172] (0xc0001da2c0) (0xc0021983c0) Create stream I0701 09:20:50.027059 6 log.go:172] (0xc0001da2c0) (0xc0021983c0) Stream added, broadcasting: 3 I0701 09:20:50.027975 6 log.go:172] (0xc0001da2c0) Reply frame received for 3 I0701 09:20:50.028005 6 log.go:172] (0xc0001da2c0) (0xc001be99a0) Create stream I0701 09:20:50.028019 6 log.go:172] (0xc0001da2c0) (0xc001be99a0) Stream added, broadcasting: 5 I0701 09:20:50.029289 6 log.go:172] (0xc0001da2c0) Reply frame received for 5 I0701 09:20:50.092203 6 log.go:172] (0xc0001da2c0) Data frame received for 5 I0701 09:20:50.092250 6 log.go:172] (0xc0001da2c0) Data frame received for 3 I0701 09:20:50.092413 6 log.go:172] (0xc0021983c0) (3) Data frame handling I0701 09:20:50.092432 6 log.go:172] (0xc0021983c0) (3) Data frame sent I0701 09:20:50.092441 6 log.go:172] (0xc0001da2c0) Data frame received for 3 I0701 09:20:50.092448 6 log.go:172] (0xc0021983c0) (3) Data frame handling I0701 09:20:50.092509 6 log.go:172] (0xc001be99a0) (5) Data frame handling I0701 09:20:50.094236 6 log.go:172] (0xc0001da2c0) Data frame received for 1 I0701 09:20:50.094256 6 log.go:172] (0xc001be9900) (1) Data frame handling I0701 09:20:50.094268 6 log.go:172] (0xc001be9900) (1) Data frame sent I0701 09:20:50.094313 6 log.go:172] (0xc0001da2c0) (0xc001be9900) Stream removed, broadcasting: 1 I0701 09:20:50.094405 6 log.go:172] (0xc0001da2c0) Go away received I0701 09:20:50.094448 6 log.go:172] (0xc0001da2c0) (0xc001be9900) Stream removed, broadcasting: 1 I0701 09:20:50.094492 6 log.go:172] (0xc0001da2c0) (0xc0021983c0) Stream removed, broadcasting: 3 I0701 09:20:50.094509 6 log.go:172] (0xc0001da2c0) (0xc001be99a0) Stream removed, broadcasting: 5 Jul 1 09:20:50.094: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 1 09:20:50.094: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-host-network-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:20:50.094: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:50.122628 6 log.go:172] (0xc001dc0d10) (0xc0027ef0e0) Create stream I0701 09:20:50.122684 6 log.go:172] (0xc001dc0d10) (0xc0027ef0e0) Stream added, broadcasting: 1 I0701 09:20:50.125767 6 log.go:172] (0xc001dc0d10) Reply frame received for 1 I0701 09:20:50.125813 6 log.go:172] (0xc001dc0d10) (0xc001be9c20) Create stream I0701 09:20:50.125831 6 log.go:172] (0xc001dc0d10) (0xc001be9c20) Stream added, broadcasting: 3 I0701 09:20:50.126583 6 log.go:172] (0xc001dc0d10) Reply frame received for 3 I0701 09:20:50.126618 6 log.go:172] (0xc001dc0d10) (0xc0027ef220) Create stream I0701 09:20:50.126629 6 log.go:172] (0xc001dc0d10) (0xc0027ef220) Stream added, broadcasting: 5 I0701 09:20:50.127401 6 log.go:172] (0xc001dc0d10) Reply frame received for 5 I0701 09:20:50.186069 6 log.go:172] (0xc001dc0d10) Data frame received for 3 I0701 09:20:50.186117 6 log.go:172] (0xc001be9c20) (3) Data frame handling I0701 09:20:50.186150 6 log.go:172] (0xc001dc0d10) Data frame received for 5 I0701 09:20:50.186195 6 log.go:172] (0xc0027ef220) (5) Data frame handling I0701 09:20:50.186229 6 log.go:172] (0xc001be9c20) (3) Data frame sent I0701 09:20:50.186251 6 log.go:172] (0xc001dc0d10) Data frame received for 3 I0701 09:20:50.186267 6 log.go:172] (0xc001be9c20) (3) Data frame handling I0701 09:20:50.187793 6 log.go:172] (0xc001dc0d10) Data frame received for 1 I0701 09:20:50.187809 6 log.go:172] (0xc0027ef0e0) (1) Data frame handling I0701 09:20:50.187827 6 log.go:172] (0xc0027ef0e0) (1) Data frame sent I0701 09:20:50.187961 6 log.go:172] (0xc001dc0d10) (0xc0027ef0e0) Stream removed, broadcasting: 1 I0701 09:20:50.188045 6 log.go:172] (0xc001dc0d10) Go away received I0701 09:20:50.188087 6 log.go:172] (0xc001dc0d10) (0xc0027ef0e0) Stream removed, broadcasting: 1 I0701 09:20:50.188111 6 log.go:172] (0xc001dc0d10) (0xc001be9c20) 
Stream removed, broadcasting: 3 I0701 09:20:50.188127 6 log.go:172] (0xc001dc0d10) (0xc0027ef220) Stream removed, broadcasting: 5 Jul 1 09:20:50.188: INFO: Exec stderr: "" Jul 1 09:20:50.188: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:20:50.188: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:50.215831 6 log.go:172] (0xc001dc11e0) (0xc0027ef540) Create stream I0701 09:20:50.215865 6 log.go:172] (0xc001dc11e0) (0xc0027ef540) Stream added, broadcasting: 1 I0701 09:20:50.217842 6 log.go:172] (0xc001dc11e0) Reply frame received for 1 I0701 09:20:50.217866 6 log.go:172] (0xc001dc11e0) (0xc0027ef5e0) Create stream I0701 09:20:50.217873 6 log.go:172] (0xc001dc11e0) (0xc0027ef5e0) Stream added, broadcasting: 3 I0701 09:20:50.218554 6 log.go:172] (0xc001dc11e0) Reply frame received for 3 I0701 09:20:50.218582 6 log.go:172] (0xc001dc11e0) (0xc002198460) Create stream I0701 09:20:50.218593 6 log.go:172] (0xc001dc11e0) (0xc002198460) Stream added, broadcasting: 5 I0701 09:20:50.219221 6 log.go:172] (0xc001dc11e0) Reply frame received for 5 I0701 09:20:50.289507 6 log.go:172] (0xc001dc11e0) Data frame received for 5 I0701 09:20:50.289553 6 log.go:172] (0xc002198460) (5) Data frame handling I0701 09:20:50.289585 6 log.go:172] (0xc001dc11e0) Data frame received for 3 I0701 09:20:50.289602 6 log.go:172] (0xc0027ef5e0) (3) Data frame handling I0701 09:20:50.289619 6 log.go:172] (0xc0027ef5e0) (3) Data frame sent I0701 09:20:50.289640 6 log.go:172] (0xc001dc11e0) Data frame received for 3 I0701 09:20:50.289653 6 log.go:172] (0xc0027ef5e0) (3) Data frame handling I0701 09:20:50.291183 6 log.go:172] (0xc001dc11e0) Data frame received for 1 I0701 09:20:50.291210 6 log.go:172] (0xc0027ef540) (1) Data frame handling I0701 09:20:50.291250 6 log.go:172] (0xc0027ef540) (1) Data frame sent 
I0701 09:20:50.291275 6 log.go:172] (0xc001dc11e0) (0xc0027ef540) Stream removed, broadcasting: 1 I0701 09:20:50.291363 6 log.go:172] (0xc001dc11e0) Go away received I0701 09:20:50.291387 6 log.go:172] (0xc001dc11e0) (0xc0027ef540) Stream removed, broadcasting: 1 I0701 09:20:50.291406 6 log.go:172] (0xc001dc11e0) (0xc0027ef5e0) Stream removed, broadcasting: 3 I0701 09:20:50.291489 6 log.go:172] (0xc001dc11e0) (0xc002198460) Stream removed, broadcasting: 5 Jul 1 09:20:50.291: INFO: Exec stderr: "" Jul 1 09:20:50.291: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:20:50.291: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:50.318097 6 log.go:172] (0xc001dc16b0) (0xc0027ef9a0) Create stream I0701 09:20:50.318124 6 log.go:172] (0xc001dc16b0) (0xc0027ef9a0) Stream added, broadcasting: 1 I0701 09:20:50.319986 6 log.go:172] (0xc001dc16b0) Reply frame received for 1 I0701 09:20:50.320024 6 log.go:172] (0xc001dc16b0) (0xc0027efa40) Create stream I0701 09:20:50.320038 6 log.go:172] (0xc001dc16b0) (0xc0027efa40) Stream added, broadcasting: 3 I0701 09:20:50.320847 6 log.go:172] (0xc001dc16b0) Reply frame received for 3 I0701 09:20:50.320882 6 log.go:172] (0xc001dc16b0) (0xc0027efb80) Create stream I0701 09:20:50.320898 6 log.go:172] (0xc001dc16b0) (0xc0027efb80) Stream added, broadcasting: 5 I0701 09:20:50.321871 6 log.go:172] (0xc001dc16b0) Reply frame received for 5 I0701 09:20:50.380766 6 log.go:172] (0xc001dc16b0) Data frame received for 3 I0701 09:20:50.380806 6 log.go:172] (0xc0027efa40) (3) Data frame handling I0701 09:20:50.380817 6 log.go:172] (0xc0027efa40) (3) Data frame sent I0701 09:20:50.380828 6 log.go:172] (0xc001dc16b0) Data frame received for 3 I0701 09:20:50.380832 6 log.go:172] (0xc0027efa40) (3) Data frame handling I0701 09:20:50.380860 6 log.go:172] (0xc001dc16b0) Data 
frame received for 5 I0701 09:20:50.380902 6 log.go:172] (0xc0027efb80) (5) Data frame handling I0701 09:20:50.382630 6 log.go:172] (0xc001dc16b0) Data frame received for 1 I0701 09:20:50.382643 6 log.go:172] (0xc0027ef9a0) (1) Data frame handling I0701 09:20:50.382651 6 log.go:172] (0xc0027ef9a0) (1) Data frame sent I0701 09:20:50.382659 6 log.go:172] (0xc001dc16b0) (0xc0027ef9a0) Stream removed, broadcasting: 1 I0701 09:20:50.382706 6 log.go:172] (0xc001dc16b0) Go away received I0701 09:20:50.382745 6 log.go:172] (0xc001dc16b0) (0xc0027ef9a0) Stream removed, broadcasting: 1 I0701 09:20:50.382768 6 log.go:172] (0xc001dc16b0) (0xc0027efa40) Stream removed, broadcasting: 3 I0701 09:20:50.382785 6 log.go:172] (0xc001dc16b0) (0xc0027efb80) Stream removed, broadcasting: 5 Jul 1 09:20:50.382: INFO: Exec stderr: "" Jul 1 09:20:50.382: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-62dlz PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 09:20:50.382: INFO: >>> kubeConfig: /root/.kube/config I0701 09:20:50.417907 6 log.go:172] (0xc000cd20b0) (0xc002198aa0) Create stream I0701 09:20:50.417935 6 log.go:172] (0xc000cd20b0) (0xc002198aa0) Stream added, broadcasting: 1 I0701 09:20:50.421502 6 log.go:172] (0xc000cd20b0) Reply frame received for 1 I0701 09:20:50.421542 6 log.go:172] (0xc000cd20b0) (0xc0027efcc0) Create stream I0701 09:20:50.421556 6 log.go:172] (0xc000cd20b0) (0xc0027efcc0) Stream added, broadcasting: 3 I0701 09:20:50.422691 6 log.go:172] (0xc000cd20b0) Reply frame received for 3 I0701 09:20:50.422748 6 log.go:172] (0xc000cd20b0) (0xc001be9cc0) Create stream I0701 09:20:50.422767 6 log.go:172] (0xc000cd20b0) (0xc001be9cc0) Stream added, broadcasting: 5 I0701 09:20:50.423713 6 log.go:172] (0xc000cd20b0) Reply frame received for 5 I0701 09:20:50.489770 6 log.go:172] (0xc000cd20b0) Data frame received for 3 I0701 09:20:50.489825 6 
log.go:172] (0xc0027efcc0) (3) Data frame handling I0701 09:20:50.489860 6 log.go:172] (0xc0027efcc0) (3) Data frame sent I0701 09:20:50.489889 6 log.go:172] (0xc000cd20b0) Data frame received for 3 I0701 09:20:50.489905 6 log.go:172] (0xc0027efcc0) (3) Data frame handling I0701 09:20:50.489939 6 log.go:172] (0xc000cd20b0) Data frame received for 5 I0701 09:20:50.489976 6 log.go:172] (0xc001be9cc0) (5) Data frame handling I0701 09:20:50.491723 6 log.go:172] (0xc000cd20b0) Data frame received for 1 I0701 09:20:50.491740 6 log.go:172] (0xc002198aa0) (1) Data frame handling I0701 09:20:50.491750 6 log.go:172] (0xc002198aa0) (1) Data frame sent I0701 09:20:50.491784 6 log.go:172] (0xc000cd20b0) (0xc002198aa0) Stream removed, broadcasting: 1 I0701 09:20:50.491800 6 log.go:172] (0xc000cd20b0) Go away received I0701 09:20:50.491993 6 log.go:172] (0xc000cd20b0) (0xc002198aa0) Stream removed, broadcasting: 1 I0701 09:20:50.492028 6 log.go:172] (0xc000cd20b0) (0xc0027efcc0) Stream removed, broadcasting: 3 I0701 09:20:50.492056 6 log.go:172] (0xc000cd20b0) (0xc001be9cc0) Stream removed, broadcasting: 5 Jul 1 09:20:50.492: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:20:50.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-62dlz" for this suite. 
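The test above execs `cat /etc/hosts` inside containers of a pod running with hostNetwork=true and verifies the file is not kubelet-managed. The kubelet marks the hosts files it generates with a leading header comment; a minimal sketch of that check (the exact header string is an assumption based on current kubelet behavior):

```python
# Sketch: decide whether an /etc/hosts file was generated by the kubelet.
# Assumption: the kubelet prefixes managed hosts files with this header comment.
KUBELET_HEADER = "# Kubernetes-managed hosts file."

def is_kubelet_managed(hosts_content: str) -> bool:
    """Return True if the file carries the kubelet's managed-hosts header."""
    return hosts_content.lstrip().startswith(KUBELET_HEADER)
```

A pod with hostNetwork=true sees the node's own /etc/hosts, so this check is expected to come back False there, which is what the test asserts.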
Jul 1 09:21:52.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:21:52.554: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-62dlz, resource: bindings, ignored listing per whitelist Jul 1 09:21:52.584: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-62dlz deletion completed in 1m2.087503629s • [SLOW TEST:79.656 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:21:52.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:21:52.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-t8hw2" for this 
suite. Jul 1 09:22:15.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:22:15.543: INFO: namespace: e2e-tests-pods-t8hw2, resource: bindings, ignored listing per whitelist Jul 1 09:22:15.544: INFO: namespace e2e-tests-pods-t8hw2 deletion completed in 22.791353321s • [SLOW TEST:22.960 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:22:15.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 09:22:16.419: INFO: Creating deployment "test-recreate-deployment" Jul 1 09:22:16.469: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jul 1 09:22:16.513: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jul 1 09:22:18.799: INFO: Waiting deployment 
"test-recreate-deployment" to complete Jul 1 09:22:18.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192136, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 09:22:21.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192136, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 
09:22:23.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192136, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 09:22:24.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192137, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729192136, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 09:22:26.806: INFO: Triggering a new rollout for 
deployment "test-recreate-deployment" Jul 1 09:22:26.815: INFO: Updating deployment test-recreate-deployment Jul 1 09:22:26.815: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 1 09:22:27.508: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-gd2n6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gd2n6/deployments/test-recreate-deployment,UID:5aef6d34-bb7c-11ea-99e8-0242ac110002,ResourceVersion:18841505,Generation:2,CreationTimestamp:2020-07-01 09:22:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-01 09:22:27 +0000 UTC 2020-07-01 09:22:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-01 09:22:27 +0000 UTC 2020-07-01 09:22:16 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jul 1 09:22:27.512: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-gd2n6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gd2n6/replicasets/test-recreate-deployment-589c4bfd,UID:6139595d-bb7c-11ea-99e8-0242ac110002,ResourceVersion:18841503,Generation:1,CreationTimestamp:2020-07-01 09:22:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5aef6d34-bb7c-11ea-99e8-0242ac110002 0xc00149d71f 0xc00149d780}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 1 09:22:27.512: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jul 1 09:22:27.512: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-gd2n6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gd2n6/replicasets/test-recreate-deployment-5bf7f65dc,UID:5afdb1ac-bb7c-11ea-99e8-0242ac110002,ResourceVersion:18841492,Generation:2,CreationTimestamp:2020-07-01 09:22:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5aef6d34-bb7c-11ea-99e8-0242ac110002 0xc00149d8d0 0xc00149d8d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 1 09:22:27.515: INFO: Pod "test-recreate-deployment-589c4bfd-2w7ct" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-2w7ct,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-gd2n6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gd2n6/pods/test-recreate-deployment-589c4bfd-2w7ct,UID:61425e94-bb7c-11ea-99e8-0242ac110002,ResourceVersion:18841504,Generation:0,CreationTimestamp:2020-07-01 09:22:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 6139595d-bb7c-11ea-99e8-0242ac110002 0xc00188afff 0xc00188b020}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4cx7f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4cx7f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4cx7f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00188b0c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00188b160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:22:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 09:22:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-01 09:22:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:22:27.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gd2n6" for this suite. 
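The Recreate strategy that this test exercises guarantees that pods from the old ReplicaSet (template hash 5bf7f65dc above) are fully terminated before any pod from the new template (589c4bfd) starts. A minimal sketch of the invariant the watch verifies, over a simplified pod-event timeline (the event tuples are illustrative, not the real watch API):

```python
# Sketch: over a timeline of (action, template_hash) pod events, verify that
# no new-template pod starts while an old-template pod is still running.
def recreate_invariant_holds(events, old_hash, new_hash):
    running_old = 0
    for action, template_hash in events:
        if action == "start":
            if template_hash == new_hash and running_old > 0:
                return False  # a new pod overlapped with a running old pod
            if template_hash == old_hash:
                running_old += 1
        elif action == "stop" and template_hash == old_hash:
            running_old -= 1
    return True

# Old pod stops before the new one starts, as Recreate requires:
ok = recreate_invariant_holds(
    [("start", "5bf7f65dc"), ("stop", "5bf7f65dc"), ("start", "589c4bfd")],
    "5bf7f65dc", "589c4bfd",
)
```

With RollingUpdate the overlap would be expected; Recreate is the strategy to use when two template versions must never serve simultaneously.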
Jul 1 09:22:33.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:22:33.634: INFO: namespace: e2e-tests-deployment-gd2n6, resource: bindings, ignored listing per whitelist Jul 1 09:22:33.674: INFO: namespace e2e-tests-deployment-gd2n6 deletion completed in 6.156104122s • [SLOW TEST:18.130 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:22:33.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jul 1 09:22:33.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jul 1 09:22:37.124: INFO: stderr: "" Jul 1 09:22:37.124: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:22:37.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-86tp9" for this suite. Jul 1 09:22:43.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:22:43.270: INFO: namespace: e2e-tests-kubectl-86tp9, resource: bindings, ignored listing per whitelist Jul 1 09:22:43.290: INFO: namespace e2e-tests-kubectl-86tp9 deletion completed in 6.160236348s • [SLOW TEST:9.615 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 09:22:43.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 1 09:22:43.418: INFO: Waiting up to 5m0s for pod "pod-6b0514b2-bb7c-11ea-a133-0242ac110018" in namespace "e2e-tests-emptydir-hmh9k" to be "success or failure" Jul 1 09:22:43.421: INFO: Pod "pod-6b0514b2-bb7c-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.649309ms Jul 1 09:22:45.426: INFO: Pod "pod-6b0514b2-bb7c-11ea-a133-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008461908s Jul 1 09:22:47.429: INFO: Pod "pod-6b0514b2-bb7c-11ea-a133-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011752757s STEP: Saw pod success Jul 1 09:22:47.429: INFO: Pod "pod-6b0514b2-bb7c-11ea-a133-0242ac110018" satisfied condition "success or failure" Jul 1 09:22:47.432: INFO: Trying to get logs from node hunter-worker pod pod-6b0514b2-bb7c-11ea-a133-0242ac110018 container test-container: STEP: delete the pod Jul 1 09:22:47.452: INFO: Waiting for pod pod-6b0514b2-bb7c-11ea-a133-0242ac110018 to disappear Jul 1 09:22:47.456: INFO: Pod pod-6b0514b2-bb7c-11ea-a133-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 09:22:47.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hmh9k" for this suite. 
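Each emptyDir permissions test above writes a file into the mounted volume with the requested mode (here 0777 on tmpfs) and checks the `ls -l`-style mode string the test container reports. The mapping from octal permission bits to that string can be sketched with the standard library; the regular-file type bit must be OR-ed in for `stat.filemode` to render the leading `-`:

```python
import stat

# Sketch: render octal permission bits the way `ls -l` (and the test
# container's output) shows them for a regular file.
def mode_string(perm_bits: int) -> str:
    return stat.filemode(stat.S_IFREG | perm_bits)

# 0o777 on a regular file renders as -rwxrwxrwx
```

So the (root,0777,tmpfs) variant expects `-rwxrwxrwx`, while the earlier (non-root,0666,default) variant expects `-rw-rw-rw-`.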
Jul 1 09:22:53.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 09:22:53.514: INFO: namespace: e2e-tests-emptydir-hmh9k, resource: bindings, ignored listing per whitelist Jul 1 09:22:53.537: INFO: namespace e2e-tests-emptydir-hmh9k deletion completed in 6.078436598s • [SLOW TEST:10.247 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSJul 1 09:22:53.537: INFO: Running AfterSuite actions on all nodes Jul 1 09:22:53.537: INFO: Running AfterSuite actions on node 1 Jul 1 09:22:53.537: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6672.366 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS