I0622 10:46:54.647084 7 e2e.go:224] Starting e2e run "afcb97dc-b475-11ea-8cd8-0242ac11001b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1592822814 - Will randomize all specs
Will run 201 of 2164 specs
Jun 22 10:46:54.843: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 10:46:54.846: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 22 10:46:54.861: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 22 10:46:54.913: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 22 10:46:54.913: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 22 10:46:54.913: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 22 10:46:54.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 22 10:46:54.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 22 10:46:54.926: INFO: e2e test version: v1.13.12
Jun 22 10:46:54.927: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:46:54.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jun 22 10:46:55.121: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jun 22 10:46:55.124: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:46:55.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7j4w7" for this suite.
Jun 22 10:47:01.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:47:01.321: INFO: namespace: e2e-tests-kubectl-7j4w7, resource: bindings, ignored listing per whitelist
Jun 22 10:47:01.338: INFO: namespace e2e-tests-kubectl-7j4w7 deletion completed in 6.119243071s
• [SLOW TEST:6.411 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:47:01.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jun 22 10:47:01.527: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:47:08.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-fdtdm" for this suite.
Jun 22 10:47:17.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:47:17.105: INFO: namespace: e2e-tests-init-container-fdtdm, resource: bindings, ignored listing per whitelist
Jun 22 10:47:17.148: INFO: namespace e2e-tests-init-container-fdtdm deletion completed in 8.161521633s
• [SLOW TEST:15.810 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:47:17.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jun 22 10:47:17.799: INFO: created pod pod-service-account-defaultsa
Jun 22 10:47:17.800: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jun 22 10:47:17.868: INFO: created pod pod-service-account-mountsa
Jun 22 10:47:17.868: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jun 22 10:47:17.914: INFO: created pod pod-service-account-nomountsa
Jun 22 10:47:17.914: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jun 22 10:47:17.961: INFO: created pod pod-service-account-defaultsa-mountspec
Jun 22 10:47:17.961: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jun 22 10:47:18.011: INFO: created pod pod-service-account-mountsa-mountspec
Jun 22 10:47:18.011: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jun 22 10:47:18.105: INFO: created pod pod-service-account-nomountsa-mountspec
Jun 22 10:47:18.105: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jun 22 10:47:18.124: INFO: created pod pod-service-account-defaultsa-nomountspec
Jun 22 10:47:18.124: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jun 22 10:47:18.197: INFO: created pod pod-service-account-mountsa-nomountspec
Jun 22 10:47:18.198: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jun 22 10:47:18.303: INFO: created pod pod-service-account-nomountsa-nomountspec
Jun 22 10:47:18.303: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:47:18.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-qv8sf" for this suite.
Jun 22 10:47:48.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:47:48.412: INFO: namespace: e2e-tests-svcaccounts-qv8sf, resource: bindings, ignored listing per whitelist
Jun 22 10:47:48.458: INFO: namespace e2e-tests-svcaccounts-qv8sf deletion completed in 30.128700903s
• [SLOW TEST:31.309 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:47:48.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-d034e1e0-b475-11ea-8cd8-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jun 22 10:47:48.569: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0367d80-b475-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-configmap-phws8" to be "success or failure"
Jun 22 10:47:48.908: INFO: Pod "pod-configmaps-d0367d80-b475-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 338.734229ms
Jun 22 10:47:50.913: INFO: Pod "pod-configmaps-d0367d80-b475-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.343894637s
Jun 22 10:47:52.917: INFO: Pod "pod-configmaps-d0367d80-b475-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34754577s
Jun 22 10:47:54.926: INFO: Pod "pod-configmaps-d0367d80-b475-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.357051729s
STEP: Saw pod success
Jun 22 10:47:54.927: INFO: Pod "pod-configmaps-d0367d80-b475-11ea-8cd8-0242ac11001b" satisfied condition "success or failure"
Jun 22 10:47:54.930: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-d0367d80-b475-11ea-8cd8-0242ac11001b container configmap-volume-test:
STEP: delete the pod
Jun 22 10:47:54.970: INFO: Waiting for pod pod-configmaps-d0367d80-b475-11ea-8cd8-0242ac11001b to disappear
Jun 22 10:47:54.980: INFO: Pod pod-configmaps-d0367d80-b475-11ea-8cd8-0242ac11001b no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:47:54.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-phws8" for this suite.
Jun 22 10:48:00.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:48:01.022: INFO: namespace: e2e-tests-configmap-phws8, resource: bindings, ignored listing per whitelist
Jun 22 10:48:01.068: INFO: namespace e2e-tests-configmap-phws8 deletion completed in 6.084406865s
• [SLOW TEST:12.611 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:48:01.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-mfm45/secret-test-d7bc0693-b475-11ea-8cd8-0242ac11001b
STEP: Creating a pod to test consume secrets
Jun 22 10:48:01.205: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7be5144-b475-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-secrets-mfm45" to be "success or failure"
Jun 22 10:48:01.209: INFO: Pod "pod-configmaps-d7be5144-b475-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.615254ms
Jun 22 10:48:03.269: INFO: Pod "pod-configmaps-d7be5144-b475-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06419434s
Jun 22 10:48:05.273: INFO: Pod "pod-configmaps-d7be5144-b475-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068574524s
STEP: Saw pod success
Jun 22 10:48:05.274: INFO: Pod "pod-configmaps-d7be5144-b475-11ea-8cd8-0242ac11001b" satisfied condition "success or failure"
Jun 22 10:48:05.277: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d7be5144-b475-11ea-8cd8-0242ac11001b container env-test:
STEP: delete the pod
Jun 22 10:48:05.459: INFO: Waiting for pod pod-configmaps-d7be5144-b475-11ea-8cd8-0242ac11001b to disappear
Jun 22 10:48:05.462: INFO: Pod pod-configmaps-d7be5144-b475-11ea-8cd8-0242ac11001b no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:48:05.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mfm45" for this suite.
Jun 22 10:48:11.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:48:11.559: INFO: namespace: e2e-tests-secrets-mfm45, resource: bindings, ignored listing per whitelist
Jun 22 10:48:11.623: INFO: namespace e2e-tests-secrets-mfm45 deletion completed in 6.157698624s
• [SLOW TEST:10.555 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:48:11.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0622 10:48:52.438018 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 22 10:48:52.438: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:48:52.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zqgxh" for this suite.
Jun 22 10:49:00.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:49:00.500: INFO: namespace: e2e-tests-gc-zqgxh, resource: bindings, ignored listing per whitelist
Jun 22 10:49:00.521: INFO: namespace e2e-tests-gc-zqgxh deletion completed in 8.080263543s
• [SLOW TEST:48.897 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:49:00.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 22 10:49:00.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-jvfct'
Jun 22 10:49:05.004: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 22 10:49:05.004: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jun 22 10:49:07.036: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-mcttk]
Jun 22 10:49:07.036: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-mcttk" in namespace "e2e-tests-kubectl-jvfct" to be "running and ready"
Jun 22 10:49:07.040: INFO: Pod "e2e-test-nginx-rc-mcttk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.302044ms
Jun 22 10:49:09.044: INFO: Pod "e2e-test-nginx-rc-mcttk": Phase="Running", Reason="", readiness=true. Elapsed: 2.00771313s
Jun 22 10:49:09.044: INFO: Pod "e2e-test-nginx-rc-mcttk" satisfied condition "running and ready"
Jun 22 10:49:09.044: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-mcttk]
Jun 22 10:49:09.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jvfct'
Jun 22 10:49:09.170: INFO: stderr: ""
Jun 22 10:49:09.170: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jun 22 10:49:09.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jvfct'
Jun 22 10:49:09.290: INFO: stderr: ""
Jun 22 10:49:09.291: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:49:09.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jvfct" for this suite.
Jun 22 10:49:31.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:49:31.376: INFO: namespace: e2e-tests-kubectl-jvfct, resource: bindings, ignored listing per whitelist
Jun 22 10:49:31.437: INFO: namespace e2e-tests-kubectl-jvfct deletion completed in 22.130285186s
• [SLOW TEST:30.916 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:49:31.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jun 22 10:49:31.519: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:49:37.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-6cgzp" for this suite.
Jun 22 10:49:43.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:49:43.352: INFO: namespace: e2e-tests-init-container-6cgzp, resource: bindings, ignored listing per whitelist
Jun 22 10:49:43.375: INFO: namespace e2e-tests-init-container-6cgzp deletion completed in 6.098673234s
• [SLOW TEST:11.938 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:49:43.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 22 10:49:43.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14b5a2e5-b476-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-ndw8b" to be "success or failure"
Jun 22 10:49:43.499: INFO: Pod "downwardapi-volume-14b5a2e5-b476-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.206223ms
Jun 22 10:49:45.503: INFO: Pod "downwardapi-volume-14b5a2e5-b476-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007183736s
Jun 22 10:49:47.506: INFO: Pod "downwardapi-volume-14b5a2e5-b476-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010422572s
STEP: Saw pod success
Jun 22 10:49:47.506: INFO: Pod "downwardapi-volume-14b5a2e5-b476-11ea-8cd8-0242ac11001b" satisfied condition "success or failure"
Jun 22 10:49:47.508: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-14b5a2e5-b476-11ea-8cd8-0242ac11001b container client-container:
STEP: delete the pod
Jun 22 10:49:47.564: INFO: Waiting for pod downwardapi-volume-14b5a2e5-b476-11ea-8cd8-0242ac11001b to disappear
Jun 22 10:49:47.571: INFO: Pod downwardapi-volume-14b5a2e5-b476-11ea-8cd8-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:49:47.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ndw8b" for this suite.
Jun 22 10:49:53.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:49:53.606: INFO: namespace: e2e-tests-downward-api-ndw8b, resource: bindings, ignored listing per whitelist
Jun 22 10:49:53.662: INFO: namespace e2e-tests-downward-api-ndw8b deletion completed in 6.089066966s
• [SLOW TEST:10.287 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:49:53.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-1add8b63-b476-11ea-8cd8-0242ac11001b
STEP: Creating configMap with name cm-test-opt-upd-1add8bb9-b476-11ea-8cd8-0242ac11001b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1add8b63-b476-11ea-8cd8-0242ac11001b
STEP: Updating configmap cm-test-opt-upd-1add8bb9-b476-11ea-8cd8-0242ac11001b
STEP: Creating configMap with name cm-test-opt-create-1add8bdd-b476-11ea-8cd8-0242ac11001b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:51:26.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kprpn" for this suite.
Jun 22 10:51:48.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:51:48.655: INFO: namespace: e2e-tests-configmap-kprpn, resource: bindings, ignored listing per whitelist
Jun 22 10:51:48.686: INFO: namespace e2e-tests-configmap-kprpn deletion completed in 22.102230298s
• [SLOW TEST:115.023 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:51:48.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 22 10:51:48.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f648fa1-b476-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-qwb9d" to be "success or failure"
Jun 22 10:51:48.856: INFO: Pod "downwardapi-volume-5f648fa1-b476-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.320604ms
Jun 22 10:51:50.860: INFO: Pod "downwardapi-volume-5f648fa1-b476-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055138152s
Jun 22 10:51:52.864: INFO: Pod "downwardapi-volume-5f648fa1-b476-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.059357837s
Jun 22 10:51:54.867: INFO: Pod "downwardapi-volume-5f648fa1-b476-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061856672s
STEP: Saw pod success
Jun 22 10:51:54.867: INFO: Pod "downwardapi-volume-5f648fa1-b476-11ea-8cd8-0242ac11001b" satisfied condition "success or failure"
Jun 22 10:51:54.869: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5f648fa1-b476-11ea-8cd8-0242ac11001b container client-container:
STEP: delete the pod
Jun 22 10:51:54.934: INFO: Waiting for pod downwardapi-volume-5f648fa1-b476-11ea-8cd8-0242ac11001b to disappear
Jun 22 10:51:54.944: INFO: Pod downwardapi-volume-5f648fa1-b476-11ea-8cd8-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:51:54.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qwb9d" for this suite.
Jun 22 10:52:00.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:52:00.983: INFO: namespace: e2e-tests-projected-qwb9d, resource: bindings, ignored listing per whitelist
Jun 22 10:52:01.030: INFO: namespace e2e-tests-projected-qwb9d deletion completed in 6.082878248s
• [SLOW TEST:12.344 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:52:01.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-426gn
Jun 22 10:52:05.225: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-426gn
STEP: checking the pod's current state and verifying that restartCount is present
Jun 22 10:52:05.228: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:56:06.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-426gn" for this suite.
Jun 22 10:56:12.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:56:13.097: INFO: namespace: e2e-tests-container-probe-426gn, resource: bindings, ignored listing per whitelist
Jun 22 10:56:13.097: INFO: namespace e2e-tests-container-probe-426gn deletion completed in 6.156814013s
• [SLOW TEST:252.067 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:56:13.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-lvt2
STEP: Creating a pod to test atomic-volume-subpath
Jun 22 10:56:13.284: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lvt2" in namespace "e2e-tests-subpath-992q2" to be "success or failure"
Jun 22 10:56:13.431: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Pending", Reason="", readiness=false. Elapsed: 147.084914ms
Jun 22 10:56:15.435: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15121011s
Jun 22 10:56:17.437: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153795029s
Jun 22 10:56:19.441: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157468119s
Jun 22 10:56:22.354: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Running", Reason="", readiness=true. Elapsed: 9.070866595s
Jun 22 10:56:24.358: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Running", Reason="", readiness=false. Elapsed: 11.074776887s
Jun 22 10:56:26.363: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Running", Reason="", readiness=false. Elapsed: 13.079052334s
Jun 22 10:56:28.367: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Running", Reason="", readiness=false. Elapsed: 15.083815537s
Jun 22 10:56:30.372: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Running", Reason="", readiness=false. Elapsed: 17.088544978s
Jun 22 10:56:32.376: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Running", Reason="", readiness=false. Elapsed: 19.092796594s
Jun 22 10:56:34.381: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Running", Reason="", readiness=false. Elapsed: 21.097002297s
Jun 22 10:56:36.386: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Running", Reason="", readiness=false. Elapsed: 23.102036662s
Jun 22 10:56:38.390: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Running", Reason="", readiness=false. Elapsed: 25.106290595s
Jun 22 10:56:40.394: INFO: Pod "pod-subpath-test-downwardapi-lvt2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.110036315s
STEP: Saw pod success
Jun 22 10:56:40.394: INFO: Pod "pod-subpath-test-downwardapi-lvt2" satisfied condition "success or failure"
Jun 22 10:56:40.396: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-lvt2 container test-container-subpath-downwardapi-lvt2:
STEP: delete the pod
Jun 22 10:56:40.468: INFO: Waiting for pod pod-subpath-test-downwardapi-lvt2 to disappear
Jun 22 10:56:40.483: INFO: Pod pod-subpath-test-downwardapi-lvt2 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-lvt2
Jun 22 10:56:40.483: INFO: Deleting pod "pod-subpath-test-downwardapi-lvt2" in namespace "e2e-tests-subpath-992q2"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:56:40.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-992q2" for this suite.
Jun 22 10:56:46.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:56:46.592: INFO: namespace: e2e-tests-subpath-992q2, resource: bindings, ignored listing per whitelist
Jun 22 10:56:46.630: INFO: namespace e2e-tests-subpath-992q2 deletion completed in 6.142062793s
• [SLOW TEST:33.533 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:56:46.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jun 22 10:56:46.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-26s9x'
Jun 22 10:56:47.515: INFO: stderr: ""
Jun 22 10:56:47.515: INFO: stdout: "pod/pause created\n"
Jun 22 10:56:47.515: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 22 10:56:47.515: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-26s9x" to be "running and ready"
Jun 22 10:56:47.563: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 47.528005ms
Jun 22 10:56:49.762: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246597038s
Jun 22 10:56:51.765: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.250105256s
Jun 22 10:56:51.765: INFO: Pod "pause" satisfied condition "running and ready"
Jun 22 10:56:51.765: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jun 22 10:56:51.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-26s9x'
Jun 22 10:56:51.874: INFO: stderr: ""
Jun 22 10:56:51.874: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jun 22 10:56:51.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-26s9x'
Jun 22 10:56:52.050: INFO: stderr: ""
Jun 22 10:56:52.050: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jun 22 10:56:52.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-26s9x'
Jun 22 10:56:52.160: INFO: stderr: ""
Jun 22 10:56:52.160: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jun 22 10:56:52.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-26s9x'
Jun 22 10:56:52.282: INFO: stderr: ""
Jun 22 10:56:52.282: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jun 22 10:56:52.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-26s9x'
Jun 22 10:56:52.498: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 22 10:56:52.498: INFO: stdout: "pod \"pause\" force deleted\n"
Jun 22 10:56:52.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-26s9x'
Jun 22 10:56:52.606: INFO: stderr: "No resources found.\n"
Jun 22 10:56:52.606: INFO: stdout: ""
Jun 22 10:56:52.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-26s9x -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 22 10:56:52.709: INFO: stderr: ""
Jun 22 10:56:52.709: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:56:52.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-26s9x" for this suite.
Jun 22 10:56:58.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:56:59.030: INFO: namespace: e2e-tests-kubectl-26s9x, resource: bindings, ignored listing per whitelist
Jun 22 10:56:59.037: INFO: namespace e2e-tests-kubectl-26s9x deletion completed in 6.324330012s
• [SLOW TEST:12.406 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:56:59.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jun 22 10:57:03.366: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 10:57:27.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-78ksm" for this suite.
Jun 22 10:57:33.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:57:33.540: INFO: namespace: e2e-tests-namespaces-78ksm, resource: bindings, ignored listing per whitelist
Jun 22 10:57:33.570: INFO: namespace e2e-tests-namespaces-78ksm deletion completed in 6.078391935s
STEP: Destroying namespace "e2e-tests-nsdeletetest-nn2g7" for this suite.
Jun 22 10:57:33.572: INFO: Namespace e2e-tests-nsdeletetest-nn2g7 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-4f9kd" for this suite.
Jun 22 10:57:41.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 10:57:41.939: INFO: namespace: e2e-tests-nsdeletetest-4f9kd, resource: bindings, ignored listing per whitelist
Jun 22 10:57:41.978: INFO: namespace e2e-tests-nsdeletetest-4f9kd deletion completed in 8.405594791s
• [SLOW TEST:42.941 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 10:57:41.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jun 22 10:57:42.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:42.398: INFO: stderr: ""
Jun 22 10:57:42.398: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 22 10:57:42.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:42.576: INFO: stderr: ""
Jun 22 10:57:42.576: INFO: stdout: "update-demo-nautilus-xtlmh update-demo-nautilus-z4tp9 "
Jun 22 10:57:42.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtlmh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:43.140: INFO: stderr: ""
Jun 22 10:57:43.140: INFO: stdout: ""
Jun 22 10:57:43.141: INFO: update-demo-nautilus-xtlmh is created but not running
Jun 22 10:57:48.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:48.246: INFO: stderr: ""
Jun 22 10:57:48.246: INFO: stdout: "update-demo-nautilus-xtlmh update-demo-nautilus-z4tp9 "
Jun 22 10:57:48.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtlmh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:48.354: INFO: stderr: ""
Jun 22 10:57:48.354: INFO: stdout: "true"
Jun 22 10:57:48.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtlmh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:48.448: INFO: stderr: ""
Jun 22 10:57:48.448: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 22 10:57:48.448: INFO: validating pod update-demo-nautilus-xtlmh
Jun 22 10:57:48.461: INFO: got data: { "image": "nautilus.jpg" }
Jun 22 10:57:48.462: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 22 10:57:48.462: INFO: update-demo-nautilus-xtlmh is verified up and running
Jun 22 10:57:48.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4tp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:48.557: INFO: stderr: ""
Jun 22 10:57:48.557: INFO: stdout: "true"
Jun 22 10:57:48.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4tp9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:48.660: INFO: stderr: ""
Jun 22 10:57:48.660: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 22 10:57:48.660: INFO: validating pod update-demo-nautilus-z4tp9
Jun 22 10:57:48.681: INFO: got data: { "image": "nautilus.jpg" }
Jun 22 10:57:48.681: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 22 10:57:48.681: INFO: update-demo-nautilus-z4tp9 is verified up and running
STEP: scaling down the replication controller
Jun 22 10:57:48.702: INFO: scanned /root for discovery docs:
Jun 22 10:57:48.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:49.860: INFO: stderr: ""
Jun 22 10:57:49.860: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 22 10:57:49.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:49.974: INFO: stderr: ""
Jun 22 10:57:49.974: INFO: stdout: "update-demo-nautilus-xtlmh update-demo-nautilus-z4tp9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 22 10:57:54.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:57:55.090: INFO: stderr: ""
Jun 22 10:57:55.091: INFO: stdout: "update-demo-nautilus-xtlmh update-demo-nautilus-z4tp9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 22 10:58:00.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:58:00.205: INFO: stderr: ""
Jun 22 10:58:00.205: INFO: stdout: "update-demo-nautilus-xtlmh update-demo-nautilus-z4tp9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 22 10:58:05.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:58:05.302: INFO: stderr: ""
Jun 22 10:58:05.302: INFO: stdout: "update-demo-nautilus-xtlmh "
Jun 22 10:58:05.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtlmh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:58:05.391: INFO: stderr: ""
Jun 22 10:58:05.391: INFO: stdout: "true"
Jun 22 10:58:05.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtlmh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:58:05.482: INFO: stderr: ""
Jun 22 10:58:05.482: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 22 10:58:05.482: INFO: validating pod update-demo-nautilus-xtlmh
Jun 22 10:58:05.485: INFO: got data: { "image": "nautilus.jpg" }
Jun 22 10:58:05.485: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 22 10:58:05.485: INFO: update-demo-nautilus-xtlmh is verified up and running
STEP: scaling up the replication controller
Jun 22 10:58:05.492: INFO: scanned /root for discovery docs:
Jun 22 10:58:05.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-xrzrg'
Jun 22 10:58:06.842: INFO: stderr: ""
Jun 22 10:58:06.842: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 22 10:58:06.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xrzrg' Jun 22 10:58:06.987: INFO: stderr: "" Jun 22 10:58:06.987: INFO: stdout: "update-demo-nautilus-7fbpr update-demo-nautilus-xtlmh " Jun 22 10:58:06.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fbpr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg' Jun 22 10:58:07.091: INFO: stderr: "" Jun 22 10:58:07.091: INFO: stdout: "" Jun 22 10:58:07.091: INFO: update-demo-nautilus-7fbpr is created but not running Jun 22 10:58:12.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xrzrg' Jun 22 10:58:12.195: INFO: stderr: "" Jun 22 10:58:12.195: INFO: stdout: "update-demo-nautilus-7fbpr update-demo-nautilus-xtlmh " Jun 22 10:58:12.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fbpr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg' Jun 22 10:58:12.289: INFO: stderr: "" Jun 22 10:58:12.289: INFO: stdout: "true" Jun 22 10:58:12.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fbpr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg' Jun 22 10:58:12.383: INFO: stderr: "" Jun 22 10:58:12.383: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 10:58:12.383: INFO: validating pod update-demo-nautilus-7fbpr Jun 22 10:58:12.387: INFO: got data: { "image": "nautilus.jpg" } Jun 22 10:58:12.387: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 10:58:12.387: INFO: update-demo-nautilus-7fbpr is verified up and running Jun 22 10:58:12.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtlmh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg' Jun 22 10:58:12.490: INFO: stderr: "" Jun 22 10:58:12.490: INFO: stdout: "true" Jun 22 10:58:12.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtlmh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xrzrg' Jun 22 10:58:12.641: INFO: stderr: "" Jun 22 10:58:12.641: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 10:58:12.641: INFO: validating pod update-demo-nautilus-xtlmh Jun 22 10:58:12.644: INFO: got data: { "image": "nautilus.jpg" } Jun 22 10:58:12.644: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
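The cleanup that follows force-deletes the controller and then double-checks that nothing matching the selector survives. The test feeds its original manifest back in on stdin (-f -); deleting by name is an equivalent sketch:

    # Skip graceful termination; kubectl warns that the pods may keep
    # running on the node for a short while.
    kubectl delete rc update-demo-nautilus --grace-period=0 --force -n "$NS"

    # Nothing should be listed for the selector once deletion has been accepted.
    kubectl get rc,svc -l name=update-demo --no-headers -n "$NS"
    kubectl get pods -l name=update-demo -n "$NS" \
      -o go-template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'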
Jun 22 10:58:12.644: INFO: update-demo-nautilus-xtlmh is verified up and running STEP: using delete to clean up resources Jun 22 10:58:12.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xrzrg' Jun 22 10:58:12.758: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 10:58:12.758: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 22 10:58:12.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xrzrg' Jun 22 10:58:13.129: INFO: stderr: "No resources found.\n" Jun 22 10:58:13.130: INFO: stdout: "" Jun 22 10:58:13.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-xrzrg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 10:58:13.245: INFO: stderr: "" Jun 22 10:58:13.245: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 10:58:13.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xrzrg" for this suite. Jun 22 10:58:37.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 10:58:37.385: INFO: namespace: e2e-tests-kubectl-xrzrg, resource: bindings, ignored listing per whitelist Jun 22 10:58:37.648: INFO: namespace e2e-tests-kubectl-xrzrg deletion completed in 24.399042549s • [SLOW TEST:55.670 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 10:58:37.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 22 10:58:38.465: INFO: Waiting up to 5m0s for pod "pod-53910a2f-b477-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-5hjtn" to be "success or failure" Jun 22 10:58:38.757: INFO: Pod "pod-53910a2f-b477-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 292.072903ms Jun 22 10:58:40.804: INFO: Pod "pod-53910a2f-b477-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339451231s Jun 22 10:58:43.117: INFO: Pod "pod-53910a2f-b477-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.65237262s Jun 22 10:58:45.590: INFO: Pod "pod-53910a2f-b477-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.125432083s STEP: Saw pod success Jun 22 10:58:45.590: INFO: Pod "pod-53910a2f-b477-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 10:58:45.594: INFO: Trying to get logs from node hunter-worker2 pod pod-53910a2f-b477-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 10:58:45.937: INFO: Waiting for pod pod-53910a2f-b477-11ea-8cd8-0242ac11001b to disappear Jun 22 10:58:45.983: INFO: Pod pod-53910a2f-b477-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 10:58:45.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5hjtn" for this suite. Jun 22 10:58:52.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 10:58:52.247: INFO: namespace: e2e-tests-emptydir-5hjtn, resource: bindings, ignored listing per whitelist Jun 22 10:58:52.297: INFO: namespace e2e-tests-emptydir-5hjtn deletion completed in 6.309984285s • [SLOW TEST:14.649 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 10:58:52.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 10:58:52.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-rbqcl' Jun 22 10:58:52.476: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 10:58:52.476: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jun 22 10:58:52.488: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 22 10:58:52.566: INFO: scanned /root for discovery docs: Jun 22 10:58:52.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-rbqcl' Jun 22 10:59:21.601: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 22 10:59:21.601: INFO: stdout: "Created e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3\nScaling up e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jun 22 10:59:21.601: INFO: stdout: "Created e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3\nScaling up e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jun 22 10:59:21.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rbqcl' Jun 22 10:59:21.710: INFO: stderr: "" Jun 22 10:59:21.710: INFO: stdout: "e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3-6z9lw " Jun 22 10:59:21.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3-6z9lw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbqcl' Jun 22 10:59:21.880: INFO: stderr: "" Jun 22 10:59:21.880: INFO: stdout: "true" Jun 22 10:59:21.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3-6z9lw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbqcl' Jun 22 10:59:21.981: INFO: stderr: "" Jun 22 10:59:21.981: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 22 10:59:21.981: INFO: e2e-test-nginx-rc-ebb1a3b8a7ca296fe5775783993714c3-6z9lw is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Jun 22 10:59:21.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rbqcl' Jun 22 10:59:22.164: INFO: stderr: "" Jun 22 10:59:22.164: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 10:59:22.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rbqcl" for this suite. Jun 22 10:59:44.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 10:59:44.697: INFO: namespace: e2e-tests-kubectl-rbqcl, resource: bindings, ignored listing per whitelist Jun 22 10:59:44.709: INFO: namespace e2e-tests-kubectl-rbqcl deletion completed in 22.339532278s • [SLOW TEST:52.412 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 10:59:44.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-7b24df39-b477-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 10:59:44.866: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b289ad0-b477-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-hf4l6" to be "success or failure" Jun 22 10:59:44.876: INFO: Pod "pod-projected-secrets-7b289ad0-b477-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206802ms Jun 22 10:59:46.880: INFO: Pod "pod-projected-secrets-7b289ad0-b477-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014173425s Jun 22 10:59:49.652: INFO: Pod "pod-projected-secrets-7b289ad0-b477-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.786384723s Jun 22 10:59:51.656: INFO: Pod "pod-projected-secrets-7b289ad0-b477-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.789901802s STEP: Saw pod success Jun 22 10:59:51.656: INFO: Pod "pod-projected-secrets-7b289ad0-b477-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 10:59:51.660: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-7b289ad0-b477-11ea-8cd8-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 22 10:59:51.894: INFO: Waiting for pod pod-projected-secrets-7b289ad0-b477-11ea-8cd8-0242ac11001b to disappear Jun 22 10:59:52.051: INFO: Pod pod-projected-secrets-7b289ad0-b477-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 10:59:52.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hf4l6" for this suite. Jun 22 10:59:58.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 10:59:58.122: INFO: namespace: e2e-tests-projected-hf4l6, resource: bindings, ignored listing per whitelist Jun 22 10:59:58.153: INFO: namespace e2e-tests-projected-hf4l6 deletion completed in 6.099069377s • [SLOW TEST:13.445 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 10:59:58.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jun 22 10:59:58.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-djfcs' Jun 22 10:59:58.546: INFO: stderr: "" Jun 22 10:59:58.546: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
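The patch step that follows attaches an annotation to every pod selected by the controller; taken on its own it is a one-line merge patch ($POD and $NS are placeholders):

    # Merge-patch a single annotation onto the pod.
    kubectl patch pod "$POD" -n "$NS" -p '{"metadata":{"annotations":{"x":"y"}}}'

    # Confirm the annotation is present.
    kubectl get pod "$POD" -n "$NS" -o jsonpath='{.metadata.annotations.x}'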
Jun 22 10:59:59.551: INFO: Selector matched 1 pods for map[app:redis] Jun 22 10:59:59.551: INFO: Found 0 / 1 Jun 22 11:00:00.551: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:00:00.551: INFO: Found 0 / 1 Jun 22 11:00:01.551: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:00:01.551: INFO: Found 0 / 1 Jun 22 11:00:02.552: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:00:02.552: INFO: Found 0 / 1 Jun 22 11:00:03.551: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:00:03.551: INFO: Found 1 / 1 Jun 22 11:00:03.551: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 22 11:00:03.554: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:00:03.554: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 22 11:00:03.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-9vf9s --namespace=e2e-tests-kubectl-djfcs -p {"metadata":{"annotations":{"x":"y"}}}' Jun 22 11:00:03.651: INFO: stderr: "" Jun 22 11:00:03.651: INFO: stdout: "pod/redis-master-9vf9s patched\n" STEP: checking annotations Jun 22 11:00:03.658: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:00:03.658: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:00:03.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-djfcs" for this suite. Jun 22 11:00:27.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:00:27.757: INFO: namespace: e2e-tests-kubectl-djfcs, resource: bindings, ignored listing per whitelist Jun 22 11:00:27.764: INFO: namespace e2e-tests-kubectl-djfcs deletion completed in 24.103863616s • [SLOW TEST:29.611 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:00:27.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-94cc9099-b477-11ea-8cd8-0242ac11001b STEP: Creating secret with name s-test-opt-upd-94cc90fe-b477-11ea-8cd8-0242ac11001b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-94cc9099-b477-11ea-8cd8-0242ac11001b STEP: Updating secret s-test-opt-upd-94cc90fe-b477-11ea-8cd8-0242ac11001b STEP: Creating secret with name s-test-opt-create-94cc9121-b477-11ea-8cd8-0242ac11001b STEP: 
waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:00:38.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-h5s88" for this suite. Jun 22 11:01:02.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:01:02.064: INFO: namespace: e2e-tests-secrets-h5s88, resource: bindings, ignored listing per whitelist Jun 22 11:01:02.110: INFO: namespace e2e-tests-secrets-h5s88 deletion completed in 24.081291602s • [SLOW TEST:34.345 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:01:02.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 22 11:01:07.006: INFO: Successfully updated pod "pod-update-a9436244-b477-11ea-8cd8-0242ac11001b" STEP: verifying the updated pod is in kubernetes Jun 22 11:01:07.052: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:01:07.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wfgtg" for this suite. 
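The configmap watch test that starts below can be reproduced with a long-running kubectl watch in one terminal and ordinary create/patch/delete commands in another. A rough sketch using the same label and data key that appear in the log ($NS is a placeholder namespace); the e2e test itself opens its three watches (label A, label B, A-or-B) through the Go client, which is why no kubectl invocations are logged for it:

    # Terminal 1: watch configmaps carrying label A; each add, update, and
    # delete event prints the object name again.
    kubectl get configmaps -n "$NS" -l watch-this-configmap=multiple-watchers-A --watch -o name

    # Terminal 2: drive the events the watcher should observe.
    kubectl create configmap e2e-watch-test-configmap-a -n "$NS"
    kubectl label configmap e2e-watch-test-configmap-a -n "$NS" watch-this-configmap=multiple-watchers-A
    kubectl patch configmap e2e-watch-test-configmap-a -n "$NS" -p '{"data":{"mutation":"1"}}'
    kubectl delete configmap e2e-watch-test-configmap-a -n "$NS"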
Jun 22 11:01:29.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:01:29.128: INFO: namespace: e2e-tests-pods-wfgtg, resource: bindings, ignored listing per whitelist Jun 22 11:01:29.211: INFO: namespace e2e-tests-pods-wfgtg deletion completed in 22.153693815s • [SLOW TEST:27.100 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:01:29.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 22 11:01:29.322: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-a,UID:b9672bbe-b477-11ea-99e8-0242ac110002,ResourceVersion:17281484,Generation:0,CreationTimestamp:2020-06-22 11:01:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 11:01:29.322: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-a,UID:b9672bbe-b477-11ea-99e8-0242ac110002,ResourceVersion:17281484,Generation:0,CreationTimestamp:2020-06-22 11:01:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 22 11:01:39.329: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-a,UID:b9672bbe-b477-11ea-99e8-0242ac110002,ResourceVersion:17281504,Generation:0,CreationTimestamp:2020-06-22 11:01:29 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 22 11:01:39.329: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-a,UID:b9672bbe-b477-11ea-99e8-0242ac110002,ResourceVersion:17281504,Generation:0,CreationTimestamp:2020-06-22 11:01:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 22 11:01:49.337: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-a,UID:b9672bbe-b477-11ea-99e8-0242ac110002,ResourceVersion:17281524,Generation:0,CreationTimestamp:2020-06-22 11:01:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 11:01:49.337: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-a,UID:b9672bbe-b477-11ea-99e8-0242ac110002,ResourceVersion:17281524,Generation:0,CreationTimestamp:2020-06-22 11:01:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 22 11:01:59.343: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-a,UID:b9672bbe-b477-11ea-99e8-0242ac110002,ResourceVersion:17281544,Generation:0,CreationTimestamp:2020-06-22 11:01:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 11:01:59.343: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-a,UID:b9672bbe-b477-11ea-99e8-0242ac110002,ResourceVersion:17281544,Generation:0,CreationTimestamp:2020-06-22 11:01:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 22 11:02:09.351: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-b,UID:d1477e00-b477-11ea-99e8-0242ac110002,ResourceVersion:17281564,Generation:0,CreationTimestamp:2020-06-22 11:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 11:02:09.351: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-b,UID:d1477e00-b477-11ea-99e8-0242ac110002,ResourceVersion:17281564,Generation:0,CreationTimestamp:2020-06-22 11:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 22 11:02:19.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-b,UID:d1477e00-b477-11ea-99e8-0242ac110002,ResourceVersion:17281580,Generation:0,CreationTimestamp:2020-06-22 11:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 11:02:19.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-qnt5z,SelfLink:/api/v1/namespaces/e2e-tests-watch-qnt5z/configmaps/e2e-watch-test-configmap-b,UID:d1477e00-b477-11ea-99e8-0242ac110002,ResourceVersion:17281580,Generation:0,CreationTimestamp:2020-06-22 11:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] 
[sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:02:29.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-qnt5z" for this suite. Jun 22 11:02:35.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:02:35.487: INFO: namespace: e2e-tests-watch-qnt5z, resource: bindings, ignored listing per whitelist Jun 22 11:02:35.492: INFO: namespace e2e-tests-watch-qnt5z deletion completed in 6.130032101s • [SLOW TEST:66.281 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:02:35.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-86gmp [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jun 22 11:02:35.656: INFO: Found 0 stateful pods, waiting for 3 Jun 22 11:02:45.662: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:02:45.662: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:02:45.662: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 22 11:02:55.662: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:02:55.662: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:02:55.662: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:02:55.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-86gmp ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:02:56.026: INFO: stderr: "I0622 11:02:55.816838 1077 log.go:172] (0xc000138630) (0xc000587400) Create stream\nI0622 11:02:55.816896 1077 log.go:172] (0xc000138630) (0xc000587400) Stream added, broadcasting: 1\nI0622 11:02:55.819270 1077 log.go:172] (0xc000138630) Reply frame received for 1\nI0622 11:02:55.819360 1077 log.go:172] (0xc000138630) (0xc000546000) Create 
stream\nI0622 11:02:55.819394 1077 log.go:172] (0xc000138630) (0xc000546000) Stream added, broadcasting: 3\nI0622 11:02:55.820349 1077 log.go:172] (0xc000138630) Reply frame received for 3\nI0622 11:02:55.820378 1077 log.go:172] (0xc000138630) (0xc0005874a0) Create stream\nI0622 11:02:55.820386 1077 log.go:172] (0xc000138630) (0xc0005874a0) Stream added, broadcasting: 5\nI0622 11:02:55.821284 1077 log.go:172] (0xc000138630) Reply frame received for 5\nI0622 11:02:56.017307 1077 log.go:172] (0xc000138630) Data frame received for 3\nI0622 11:02:56.017334 1077 log.go:172] (0xc000546000) (3) Data frame handling\nI0622 11:02:56.017340 1077 log.go:172] (0xc000546000) (3) Data frame sent\nI0622 11:02:56.017605 1077 log.go:172] (0xc000138630) Data frame received for 3\nI0622 11:02:56.017619 1077 log.go:172] (0xc000546000) (3) Data frame handling\nI0622 11:02:56.017839 1077 log.go:172] (0xc000138630) Data frame received for 5\nI0622 11:02:56.017870 1077 log.go:172] (0xc0005874a0) (5) Data frame handling\nI0622 11:02:56.019707 1077 log.go:172] (0xc000138630) Data frame received for 1\nI0622 11:02:56.019725 1077 log.go:172] (0xc000587400) (1) Data frame handling\nI0622 11:02:56.019744 1077 log.go:172] (0xc000587400) (1) Data frame sent\nI0622 11:02:56.019758 1077 log.go:172] (0xc000138630) (0xc000587400) Stream removed, broadcasting: 1\nI0622 11:02:56.019886 1077 log.go:172] (0xc000138630) (0xc000587400) Stream removed, broadcasting: 1\nI0622 11:02:56.019905 1077 log.go:172] (0xc000138630) (0xc000546000) Stream removed, broadcasting: 3\nI0622 11:02:56.019913 1077 log.go:172] (0xc000138630) (0xc0005874a0) Stream removed, broadcasting: 5\n" Jun 22 11:02:56.026: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:02:56.026: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 22 11:03:06.089: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 22 11:03:16.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-86gmp ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 11:03:16.335: INFO: stderr: "I0622 11:03:16.235569 1099 log.go:172] (0xc00086c2c0) (0xc000579400) Create stream\nI0622 11:03:16.235623 1099 log.go:172] (0xc00086c2c0) (0xc000579400) Stream added, broadcasting: 1\nI0622 11:03:16.238224 1099 log.go:172] (0xc00086c2c0) Reply frame received for 1\nI0622 11:03:16.238297 1099 log.go:172] (0xc00086c2c0) (0xc000750000) Create stream\nI0622 11:03:16.238329 1099 log.go:172] (0xc00086c2c0) (0xc000750000) Stream added, broadcasting: 3\nI0622 11:03:16.239391 1099 log.go:172] (0xc00086c2c0) Reply frame received for 3\nI0622 11:03:16.239424 1099 log.go:172] (0xc00086c2c0) (0xc0005794a0) Create stream\nI0622 11:03:16.239432 1099 log.go:172] (0xc00086c2c0) (0xc0005794a0) Stream added, broadcasting: 5\nI0622 11:03:16.240624 1099 log.go:172] (0xc00086c2c0) Reply frame received for 5\nI0622 11:03:16.327343 1099 log.go:172] (0xc00086c2c0) Data frame received for 3\nI0622 11:03:16.327388 1099 log.go:172] (0xc000750000) (3) Data frame handling\nI0622 11:03:16.327408 1099 log.go:172] (0xc000750000) (3) Data frame sent\nI0622 11:03:16.327422 1099 log.go:172] (0xc00086c2c0) Data frame received for 3\nI0622 11:03:16.327433 1099 
log.go:172] (0xc00086c2c0) Data frame received for 5\nI0622 11:03:16.327449 1099 log.go:172] (0xc0005794a0) (5) Data frame handling\nI0622 11:03:16.327468 1099 log.go:172] (0xc000750000) (3) Data frame handling\nI0622 11:03:16.329067 1099 log.go:172] (0xc00086c2c0) Data frame received for 1\nI0622 11:03:16.329095 1099 log.go:172] (0xc000579400) (1) Data frame handling\nI0622 11:03:16.329255 1099 log.go:172] (0xc000579400) (1) Data frame sent\nI0622 11:03:16.329284 1099 log.go:172] (0xc00086c2c0) (0xc000579400) Stream removed, broadcasting: 1\nI0622 11:03:16.329329 1099 log.go:172] (0xc00086c2c0) Go away received\nI0622 11:03:16.329574 1099 log.go:172] (0xc00086c2c0) (0xc000579400) Stream removed, broadcasting: 1\nI0622 11:03:16.329593 1099 log.go:172] (0xc00086c2c0) (0xc000750000) Stream removed, broadcasting: 3\nI0622 11:03:16.329602 1099 log.go:172] (0xc00086c2c0) (0xc0005794a0) Stream removed, broadcasting: 5\n" Jun 22 11:03:16.335: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 11:03:16.335: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 11:03:26.356: INFO: Waiting for StatefulSet e2e-tests-statefulset-86gmp/ss2 to complete update Jun 22 11:03:26.356: INFO: Waiting for Pod e2e-tests-statefulset-86gmp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 22 11:03:26.356: INFO: Waiting for Pod e2e-tests-statefulset-86gmp/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 22 11:03:36.363: INFO: Waiting for StatefulSet e2e-tests-statefulset-86gmp/ss2 to complete update Jun 22 11:03:36.363: INFO: Waiting for Pod e2e-tests-statefulset-86gmp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jun 22 11:03:46.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-86gmp ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:03:46.979: INFO: stderr: "I0622 11:03:46.497046 1122 log.go:172] (0xc00085a210) (0xc00084e5a0) Create stream\nI0622 11:03:46.497298 1122 log.go:172] (0xc00085a210) (0xc00084e5a0) Stream added, broadcasting: 1\nI0622 11:03:46.500346 1122 log.go:172] (0xc00085a210) Reply frame received for 1\nI0622 11:03:46.500399 1122 log.go:172] (0xc00085a210) (0xc00084e640) Create stream\nI0622 11:03:46.500426 1122 log.go:172] (0xc00085a210) (0xc00084e640) Stream added, broadcasting: 3\nI0622 11:03:46.502839 1122 log.go:172] (0xc00085a210) Reply frame received for 3\nI0622 11:03:46.502881 1122 log.go:172] (0xc00085a210) (0xc000716000) Create stream\nI0622 11:03:46.502899 1122 log.go:172] (0xc00085a210) (0xc000716000) Stream added, broadcasting: 5\nI0622 11:03:46.505031 1122 log.go:172] (0xc00085a210) Reply frame received for 5\nI0622 11:03:46.970354 1122 log.go:172] (0xc00085a210) Data frame received for 3\nI0622 11:03:46.970392 1122 log.go:172] (0xc00084e640) (3) Data frame handling\nI0622 11:03:46.970423 1122 log.go:172] (0xc00084e640) (3) Data frame sent\nI0622 11:03:46.970647 1122 log.go:172] (0xc00085a210) Data frame received for 5\nI0622 11:03:46.970688 1122 log.go:172] (0xc000716000) (5) Data frame handling\nI0622 11:03:46.970775 1122 log.go:172] (0xc00085a210) Data frame received for 3\nI0622 11:03:46.970816 1122 log.go:172] (0xc00084e640) (3) Data frame handling\nI0622 11:03:46.972709 1122 log.go:172] (0xc00085a210) Data frame received for 1\nI0622 11:03:46.972728 1122 
log.go:172] (0xc00084e5a0) (1) Data frame handling\nI0622 11:03:46.972742 1122 log.go:172] (0xc00084e5a0) (1) Data frame sent\nI0622 11:03:46.972785 1122 log.go:172] (0xc00085a210) (0xc00084e5a0) Stream removed, broadcasting: 1\nI0622 11:03:46.972818 1122 log.go:172] (0xc00085a210) Go away received\nI0622 11:03:46.973396 1122 log.go:172] (0xc00085a210) (0xc00084e5a0) Stream removed, broadcasting: 1\nI0622 11:03:46.973420 1122 log.go:172] (0xc00085a210) (0xc00084e640) Stream removed, broadcasting: 3\nI0622 11:03:46.973438 1122 log.go:172] (0xc00085a210) (0xc000716000) Stream removed, broadcasting: 5\n" Jun 22 11:03:46.979: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:03:46.979: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 11:03:57.023: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 22 11:04:07.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-86gmp ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 11:04:07.277: INFO: stderr: "I0622 11:04:07.188657 1145 log.go:172] (0xc000138630) (0xc00084c640) Create stream\nI0622 11:04:07.188718 1145 log.go:172] (0xc000138630) (0xc00084c640) Stream added, broadcasting: 1\nI0622 11:04:07.191883 1145 log.go:172] (0xc000138630) Reply frame received for 1\nI0622 11:04:07.191934 1145 log.go:172] (0xc000138630) (0xc000574be0) Create stream\nI0622 11:04:07.191950 1145 log.go:172] (0xc000138630) (0xc000574be0) Stream added, broadcasting: 3\nI0622 11:04:07.193027 1145 log.go:172] (0xc000138630) Reply frame received for 3\nI0622 11:04:07.193067 1145 log.go:172] (0xc000138630) (0xc00084c6e0) Create stream\nI0622 11:04:07.193078 1145 log.go:172] (0xc000138630) (0xc00084c6e0) Stream added, broadcasting: 5\nI0622 11:04:07.194324 1145 log.go:172] (0xc000138630) Reply frame received for 5\nI0622 11:04:07.269715 1145 log.go:172] (0xc000138630) Data frame received for 5\nI0622 11:04:07.269744 1145 log.go:172] (0xc00084c6e0) (5) Data frame handling\nI0622 11:04:07.269790 1145 log.go:172] (0xc000138630) Data frame received for 3\nI0622 11:04:07.269815 1145 log.go:172] (0xc000574be0) (3) Data frame handling\nI0622 11:04:07.269841 1145 log.go:172] (0xc000574be0) (3) Data frame sent\nI0622 11:04:07.270021 1145 log.go:172] (0xc000138630) Data frame received for 3\nI0622 11:04:07.270037 1145 log.go:172] (0xc000574be0) (3) Data frame handling\nI0622 11:04:07.271398 1145 log.go:172] (0xc000138630) Data frame received for 1\nI0622 11:04:07.271423 1145 log.go:172] (0xc00084c640) (1) Data frame handling\nI0622 11:04:07.271432 1145 log.go:172] (0xc00084c640) (1) Data frame sent\nI0622 11:04:07.271444 1145 log.go:172] (0xc000138630) (0xc00084c640) Stream removed, broadcasting: 1\nI0622 11:04:07.271461 1145 log.go:172] (0xc000138630) Go away received\nI0622 11:04:07.271683 1145 log.go:172] (0xc000138630) (0xc00084c640) Stream removed, broadcasting: 1\nI0622 11:04:07.271697 1145 log.go:172] (0xc000138630) (0xc000574be0) Stream removed, broadcasting: 3\nI0622 11:04:07.271703 1145 log.go:172] (0xc000138630) (0xc00084c6e0) Stream removed, broadcasting: 5\n" Jun 22 11:04:07.277: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 11:04:07.277: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 11:04:27.296: 
INFO: Waiting for StatefulSet e2e-tests-statefulset-86gmp/ss2 to complete update Jun 22 11:04:27.296: INFO: Waiting for Pod e2e-tests-statefulset-86gmp/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 22 11:04:37.306: INFO: Deleting all statefulset in ns e2e-tests-statefulset-86gmp Jun 22 11:04:37.309: INFO: Scaling statefulset ss2 to 0 Jun 22 11:05:07.324: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 11:05:07.328: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:05:07.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-86gmp" for this suite. Jun 22 11:05:13.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:05:13.428: INFO: namespace: e2e-tests-statefulset-86gmp, resource: bindings, ignored listing per whitelist Jun 22 11:05:13.465: INFO: namespace e2e-tests-statefulset-86gmp deletion completed in 6.116474002s • [SLOW TEST:157.973 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:05:13.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jun 22 11:05:13.558: INFO: Waiting up to 5m0s for pod "var-expansion-3f10e9bb-b478-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-var-expansion-9jt27" to be "success or failure" Jun 22 11:05:13.591: INFO: Pod "var-expansion-3f10e9bb-b478-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.281844ms Jun 22 11:05:15.609: INFO: Pod "var-expansion-3f10e9bb-b478-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050529133s Jun 22 11:05:17.613: INFO: Pod "var-expansion-3f10e9bb-b478-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.054472964s Jun 22 11:05:19.617: INFO: Pod "var-expansion-3f10e9bb-b478-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.058920187s STEP: Saw pod success Jun 22 11:05:19.617: INFO: Pod "var-expansion-3f10e9bb-b478-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:05:19.621: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-3f10e9bb-b478-11ea-8cd8-0242ac11001b container dapi-container: STEP: delete the pod Jun 22 11:05:19.639: INFO: Waiting for pod var-expansion-3f10e9bb-b478-11ea-8cd8-0242ac11001b to disappear Jun 22 11:05:19.655: INFO: Pod var-expansion-3f10e9bb-b478-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:05:19.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-9jt27" for this suite. Jun 22 11:05:25.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:05:25.702: INFO: namespace: e2e-tests-var-expansion-9jt27, resource: bindings, ignored listing per whitelist Jun 22 11:05:25.756: INFO: namespace e2e-tests-var-expansion-9jt27 deletion completed in 6.097365083s • [SLOW TEST:12.291 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:05:25.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0622 11:05:26.940718 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
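The garbage-collector check above amounts to: create a Deployment, delete it without orphaning its children, and confirm the ReplicaSet and Pods it spawned are removed as well. Outside the framework that can be approximated as follows (deployment name and image are placeholders; the test drives the API directly and tolerates the brief lag visible in the "expected 0 ... got ..." lines above):

    # Create a deployment; it immediately creates a ReplicaSet, which creates the pods.
    kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine -n "$NS"

    # Deleting the deployment with cascading (the default) lets the garbage
    # collector remove the dependent ReplicaSet and pods too.
    kubectl delete deployment gc-demo -n "$NS"

    # After a short delay both of these should return nothing.
    kubectl get rs -l app=gc-demo -n "$NS" --no-headers
    kubectl get pods -l app=gc-demo -n "$NS" --no-headers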
Jun 22 11:05:26.940: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:05:26.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-sfz7f" for this suite. Jun 22 11:05:33.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:05:33.024: INFO: namespace: e2e-tests-gc-sfz7f, resource: bindings, ignored listing per whitelist Jun 22 11:05:33.073: INFO: namespace e2e-tests-gc-sfz7f deletion completed in 6.129259915s • [SLOW TEST:7.317 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:05:33.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:05:33.208: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 22 11:05:33.215: INFO: Number of nodes with available pods: 0 Jun 22 11:05:33.215: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
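The node-label steps in this DaemonSet test flip a label on one node so that it starts, and later stops, matching the DaemonSet's node selector. The label key and the placement of the daemon pod in this sketch are placeholders; the log only shows that the label is switched from blue to green ($NODE and $NS are placeholders):

    # Give one node the label the DaemonSet selects on; a daemon pod should
    # be scheduled there shortly afterwards.
    kubectl label node "$NODE" color=blue --overwrite
    kubectl get pods -n "$NS" -o wide

    # Switch the label and the daemon pod is unscheduled again, matching the
    # "Number of running nodes: 0" lines that follow.
    kubectl label node "$NODE" color=green --overwrite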
Jun 22 11:05:33.249: INFO: Number of nodes with available pods: 0 Jun 22 11:05:33.249: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:34.326: INFO: Number of nodes with available pods: 0 Jun 22 11:05:34.326: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:35.302: INFO: Number of nodes with available pods: 0 Jun 22 11:05:35.302: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:36.255: INFO: Number of nodes with available pods: 0 Jun 22 11:05:36.255: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:37.253: INFO: Number of nodes with available pods: 1 Jun 22 11:05:37.253: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 22 11:05:37.290: INFO: Number of nodes with available pods: 1 Jun 22 11:05:37.290: INFO: Number of running nodes: 0, number of available pods: 1 Jun 22 11:05:38.295: INFO: Number of nodes with available pods: 0 Jun 22 11:05:38.295: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 22 11:05:38.350: INFO: Number of nodes with available pods: 0 Jun 22 11:05:38.350: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:39.353: INFO: Number of nodes with available pods: 0 Jun 22 11:05:39.353: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:40.354: INFO: Number of nodes with available pods: 0 Jun 22 11:05:40.354: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:41.354: INFO: Number of nodes with available pods: 0 Jun 22 11:05:41.354: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:42.355: INFO: Number of nodes with available pods: 0 Jun 22 11:05:42.355: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:43.354: INFO: Number of nodes with available pods: 0 Jun 22 11:05:43.354: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:44.354: INFO: Number of nodes with available pods: 0 Jun 22 11:05:44.354: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:45.354: INFO: Number of nodes with available pods: 0 Jun 22 11:05:45.354: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:46.355: INFO: Number of nodes with available pods: 0 Jun 22 11:05:46.355: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:47.565: INFO: Number of nodes with available pods: 0 Jun 22 11:05:47.565: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:48.812: INFO: Number of nodes with available pods: 0 Jun 22 11:05:48.812: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:49.355: INFO: Number of nodes with available pods: 0 Jun 22 11:05:49.355: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:50.354: INFO: Number of nodes with available pods: 0 Jun 22 11:05:50.354: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:51.357: INFO: Number of nodes with available pods: 0 Jun 22 11:05:51.357: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:52.355: INFO: Number of nodes with available pods: 0 Jun 22 11:05:52.355: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:53.356: INFO: Number of nodes with available pods: 0 Jun 22 11:05:53.356: INFO: Node hunter-worker is running 
more than one daemon pod Jun 22 11:05:54.355: INFO: Number of nodes with available pods: 0 Jun 22 11:05:54.355: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:05:55.354: INFO: Number of nodes with available pods: 1 Jun 22 11:05:55.354: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8tnh7, will wait for the garbage collector to delete the pods Jun 22 11:05:55.417: INFO: Deleting DaemonSet.extensions daemon-set took: 6.070671ms Jun 22 11:05:55.517: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.244914ms Jun 22 11:06:11.420: INFO: Number of nodes with available pods: 0 Jun 22 11:06:11.420: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 11:06:11.424: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8tnh7/daemonsets","resourceVersion":"17282490"},"items":null} Jun 22 11:06:11.426: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8tnh7/pods","resourceVersion":"17282490"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:06:11.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-8tnh7" for this suite. Jun 22 11:06:17.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:06:17.517: INFO: namespace: e2e-tests-daemonsets-8tnh7, resource: bindings, ignored listing per whitelist Jun 22 11:06:17.565: INFO: namespace e2e-tests-daemonsets-8tnh7 deletion completed in 6.099528134s • [SLOW TEST:44.492 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:06:17.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:06:17.679: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:06:21.720: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-26wwc" for this suite. Jun 22 11:07:05.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:07:05.788: INFO: namespace: e2e-tests-pods-26wwc, resource: bindings, ignored listing per whitelist Jun 22 11:07:05.835: INFO: namespace e2e-tests-pods-26wwc deletion completed in 44.112601555s • [SLOW TEST:48.270 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:07:05.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 11:07:05.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-m5pdp' Jun 22 11:07:06.075: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 11:07:06.075: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Jun 22 11:07:08.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-m5pdp' Jun 22 11:07:08.429: INFO: stderr: "" Jun 22 11:07:08.429: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:07:08.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m5pdp" for this suite. 
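The deployment that kubectl run created above (generator deployment/apps.v1) is roughly what applying a manifest like the following sketch would give; the replica count and the run label scheme are assumptions about the deprecated generator's defaults, while the name and image are taken from the logged command:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1                          # kubectl run's default, stated explicitly here
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # assumed to mirror the generator's label scheme
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine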
Jun 22 11:07:30.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:07:30.720: INFO: namespace: e2e-tests-kubectl-m5pdp, resource: bindings, ignored listing per whitelist Jun 22 11:07:30.743: INFO: namespace e2e-tests-kubectl-m5pdp deletion completed in 22.311666248s • [SLOW TEST:24.908 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:07:30.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-90ed1d62-b478-11ea-8cd8-0242ac11001b STEP: Creating secret with name s-test-opt-upd-90ed1dcc-b478-11ea-8cd8-0242ac11001b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-90ed1d62-b478-11ea-8cd8-0242ac11001b STEP: Updating secret s-test-opt-upd-90ed1dcc-b478-11ea-8cd8-0242ac11001b STEP: Creating secret with name s-test-opt-create-90ed1e07-b478-11ea-8cd8-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:07:39.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4v79v" for this suite. 
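The projected-secret spec above mounts several secrets through one projected volume; the secret that gets deleted mid-test is marked optional, so the volume keeps serving while the test waits to observe the update. The volume shape can be sketched like this (secret and pod names are shortened placeholders for the UUID-suffixed ones in the log, and the image is assumed):

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: app
    image: busybox                     # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del         # the secret the test deletes; optional, so the pod keeps running
          optional: true
      - secret:
          name: s-test-opt-upd         # the secret the test updates in place
      - secret:
          name: s-test-opt-create      # created only after the pod starts, hence also optional
          optional: true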
Jun 22 11:08:01.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:08:01.150: INFO: namespace: e2e-tests-projected-4v79v, resource: bindings, ignored listing per whitelist Jun 22 11:08:01.160: INFO: namespace e2e-tests-projected-4v79v deletion completed in 22.137226529s • [SLOW TEST:30.416 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:08:01.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 22 11:08:09.626: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 11:08:09.643: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 11:08:11.643: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 11:08:11.647: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 11:08:13.643: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 11:08:13.647: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 11:08:15.643: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 11:08:15.647: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 11:08:17.643: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 11:08:17.655: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 11:08:19.643: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 11:08:19.647: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 11:08:21.643: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 11:08:21.647: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:08:21.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-x45hh" for this suite. 
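The lifecycle-hook spec above deletes pod-with-prestop-http-hook and then polls until it is gone, which is what triggers the preStop HTTP GET. A container with such a hook can be sketched as follows (the handler address, path and image are assumptions; the real hook points at the handler pod created in the BeforeEach step):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: app
    image: busybox                     # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop      # assumed path on the hook-handler pod
          host: 10.244.1.10            # assumed handler pod IP
          port: 8080                   # assumed handler port

When the pod is deleted, the kubelet fires the GET before the container receives its termination signal, and the "check prestop hook" step asks the handler whether the request arrived.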
Jun 22 11:08:43.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:08:43.730: INFO: namespace: e2e-tests-container-lifecycle-hook-x45hh, resource: bindings, ignored listing per whitelist Jun 22 11:08:43.769: INFO: namespace e2e-tests-container-lifecycle-hook-x45hh deletion completed in 22.110384201s • [SLOW TEST:42.609 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:08:43.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-9q97m Jun 22 11:08:47.902: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-9q97m STEP: checking the pod's current state and verifying that restartCount is present Jun 22 11:08:47.905: INFO: Initial restart count of pod liveness-http is 0 Jun 22 11:09:10.718: INFO: Restart count of pod e2e-tests-container-probe-9q97m/liveness-http is now 1 (22.812767603s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:09:10.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-9q97m" for this suite. 
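The probe spec above shows liveness-http going from restart count 0 to 1 about 23 seconds in, i.e. the /healthz liveness probe failed and the kubelet restarted the container. A pod with that kind of probe can be sketched as follows (image, args and timings are assumptions; the conformance fixture uses a test image whose /healthz deliberately starts failing):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness         # assumed image; serves /healthz for a while, then returns errors
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3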
Jun 22 11:09:16.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:09:16.804: INFO: namespace: e2e-tests-container-probe-9q97m, resource: bindings, ignored listing per whitelist Jun 22 11:09:16.859: INFO: namespace e2e-tests-container-probe-9q97m deletion completed in 6.101256041s • [SLOW TEST:33.090 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:09:16.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-d028ae5b-b478-11ea-8cd8-0242ac11001b STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:09:22.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2whcf" for this suite. 
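The ConfigMap spec above checks that both the plain data keys and the binaryData keys of a mounted ConfigMap show up in the volume. A sketch of such a ConfigMap and the pod that mounts it (names, keys and payload are illustrative assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-demo        # placeholder for the UUID-suffixed name in the log
data:
  data-1: value-1                      # text key
binaryData:
  dump.bin: 3q2+7w==                   # arbitrary bytes, base64-encoded (0xDEADBEEF here)
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: busybox                     # assumed image
    command: ["sh", "-c", "ls -l /etc/cm && sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-test-upd-demo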
Jun 22 11:09:45.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:09:45.079: INFO: namespace: e2e-tests-configmap-2whcf, resource: bindings, ignored listing per whitelist Jun 22 11:09:45.096: INFO: namespace e2e-tests-configmap-2whcf deletion completed in 22.099958356s • [SLOW TEST:28.237 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:09:45.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 22 11:09:45.779: INFO: Pod name wrapped-volume-race-e150abbd-b478-11ea-8cd8-0242ac11001b: Found 0 pods out of 5 Jun 22 11:09:50.788: INFO: Pod name wrapped-volume-race-e150abbd-b478-11ea-8cd8-0242ac11001b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e150abbd-b478-11ea-8cd8-0242ac11001b in namespace e2e-tests-emptydir-wrapper-t2prt, will wait for the garbage collector to delete the pods Jun 22 11:11:52.872: INFO: Deleting ReplicationController wrapped-volume-race-e150abbd-b478-11ea-8cd8-0242ac11001b took: 6.541427ms Jun 22 11:11:52.972: INFO: Terminating ReplicationController wrapped-volume-race-e150abbd-b478-11ea-8cd8-0242ac11001b pods took: 100.282826ms STEP: Creating RC which spawns configmap-volume pods Jun 22 11:12:30.060: INFO: Pod name wrapped-volume-race-4333c2b1-b479-11ea-8cd8-0242ac11001b: Found 0 pods out of 5 Jun 22 11:12:35.094: INFO: Pod name wrapped-volume-race-4333c2b1-b479-11ea-8cd8-0242ac11001b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4333c2b1-b479-11ea-8cd8-0242ac11001b in namespace e2e-tests-emptydir-wrapper-t2prt, will wait for the garbage collector to delete the pods Jun 22 11:15:11.183: INFO: Deleting ReplicationController wrapped-volume-race-4333c2b1-b479-11ea-8cd8-0242ac11001b took: 15.915835ms Jun 22 11:15:11.383: INFO: Terminating ReplicationController wrapped-volume-race-4333c2b1-b479-11ea-8cd8-0242ac11001b pods took: 200.259163ms STEP: Creating RC which spawns configmap-volume pods Jun 22 11:15:51.642: INFO: Pod name wrapped-volume-race-bb5f2341-b479-11ea-8cd8-0242ac11001b: Found 0 pods out of 5 Jun 22 11:15:56.655: INFO: Pod name wrapped-volume-race-bb5f2341-b479-11ea-8cd8-0242ac11001b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bb5f2341-b479-11ea-8cd8-0242ac11001b in 
namespace e2e-tests-emptydir-wrapper-t2prt, will wait for the garbage collector to delete the pods Jun 22 11:18:00.735: INFO: Deleting ReplicationController wrapped-volume-race-bb5f2341-b479-11ea-8cd8-0242ac11001b took: 6.824875ms Jun 22 11:18:00.835: INFO: Terminating ReplicationController wrapped-volume-race-bb5f2341-b479-11ea-8cd8-0242ac11001b pods took: 100.297245ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:18:42.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-t2prt" for this suite. Jun 22 11:18:50.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:18:51.036: INFO: namespace: e2e-tests-emptydir-wrapper-t2prt, resource: bindings, ignored listing per whitelist Jun 22 11:18:51.059: INFO: namespace e2e-tests-emptydir-wrapper-t2prt deletion completed in 8.093155431s • [SLOW TEST:545.963 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:18:51.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:18:51.173: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:18:52.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-n6crc" for this suite. 
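The CustomResourceDefinition spec above only needs to register a definition and delete it again. Against this v1.13 cluster that goes through apiextensions.k8s.io/v1beta1, so the object being created looks roughly like this sketch (group and kind names are assumptions):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com               # must be <plural>.<group>
spec:
  group: example.com                   # assumed group
  version: v1                          # v1beta1 still accepts the single-version field
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
    listKind: FooList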
Jun 22 11:18:58.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:18:58.362: INFO: namespace: e2e-tests-custom-resource-definition-n6crc, resource: bindings, ignored listing per whitelist Jun 22 11:18:58.380: INFO: namespace e2e-tests-custom-resource-definition-n6crc deletion completed in 6.10463979s • [SLOW TEST:7.320 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:18:58.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-fx4c STEP: Creating a pod to test atomic-volume-subpath Jun 22 11:18:58.558: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fx4c" in namespace "e2e-tests-subpath-c8pz2" to be "success or failure" Jun 22 11:18:58.565: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.016927ms Jun 22 11:19:00.697: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139852966s Jun 22 11:19:02.701: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143868826s Jun 22 11:19:04.706: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=true. Elapsed: 6.148452088s Jun 22 11:19:06.711: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=false. Elapsed: 8.153067051s Jun 22 11:19:08.715: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=false. Elapsed: 10.157707736s Jun 22 11:19:10.720: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=false. Elapsed: 12.162250247s Jun 22 11:19:12.724: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=false. Elapsed: 14.16674741s Jun 22 11:19:14.729: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=false. Elapsed: 16.171320518s Jun 22 11:19:16.734: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.176093043s Jun 22 11:19:18.738: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=false. Elapsed: 20.180475157s Jun 22 11:19:20.743: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=false. Elapsed: 22.185247939s Jun 22 11:19:22.748: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Running", Reason="", readiness=false. Elapsed: 24.189975053s Jun 22 11:19:24.752: INFO: Pod "pod-subpath-test-configmap-fx4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.194038282s STEP: Saw pod success Jun 22 11:19:24.752: INFO: Pod "pod-subpath-test-configmap-fx4c" satisfied condition "success or failure" Jun 22 11:19:24.754: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-fx4c container test-container-subpath-configmap-fx4c: STEP: delete the pod Jun 22 11:19:24.785: INFO: Waiting for pod pod-subpath-test-configmap-fx4c to disappear Jun 22 11:19:24.794: INFO: Pod pod-subpath-test-configmap-fx4c no longer exists STEP: Deleting pod pod-subpath-test-configmap-fx4c Jun 22 11:19:24.794: INFO: Deleting pod "pod-subpath-test-configmap-fx4c" in namespace "e2e-tests-subpath-c8pz2" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:19:24.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-c8pz2" for this suite. Jun 22 11:19:30.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:19:30.902: INFO: namespace: e2e-tests-subpath-c8pz2, resource: bindings, ignored listing per whitelist Jun 22 11:19:30.911: INFO: namespace e2e-tests-subpath-c8pz2 deletion completed in 6.112454212s • [SLOW TEST:32.531 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:19:30.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-3e3d21eb-b47a-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:19:31.179: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e3ec6c1-b47a-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-g8n2l" to be "success or failure" Jun 22 11:19:31.332: INFO: Pod "pod-projected-configmaps-3e3ec6c1-b47a-11ea-8cd8-0242ac11001b": 
Phase="Pending", Reason="", readiness=false. Elapsed: 153.328665ms Jun 22 11:19:33.338: INFO: Pod "pod-projected-configmaps-3e3ec6c1-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158703165s Jun 22 11:19:35.341: INFO: Pod "pod-projected-configmaps-3e3ec6c1-b47a-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162421315s STEP: Saw pod success Jun 22 11:19:35.341: INFO: Pod "pod-projected-configmaps-3e3ec6c1-b47a-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:19:35.345: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-3e3ec6c1-b47a-11ea-8cd8-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 22 11:19:35.465: INFO: Waiting for pod pod-projected-configmaps-3e3ec6c1-b47a-11ea-8cd8-0242ac11001b to disappear Jun 22 11:19:35.470: INFO: Pod pod-projected-configmaps-3e3ec6c1-b47a-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:19:35.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-g8n2l" for this suite. Jun 22 11:19:41.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:19:41.537: INFO: namespace: e2e-tests-projected-g8n2l, resource: bindings, ignored listing per whitelist Jun 22 11:19:41.577: INFO: namespace e2e-tests-projected-g8n2l deletion completed in 6.088822445s • [SLOW TEST:10.666 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:19:41.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 22 11:19:42.054: INFO: Waiting up to 5m0s for pod "pod-44bb242e-b47a-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-kmxb4" to be "success or failure" Jun 22 11:19:42.076: INFO: Pod "pod-44bb242e-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.349575ms Jun 22 11:19:44.081: INFO: Pod "pod-44bb242e-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026568045s Jun 22 11:19:46.085: INFO: Pod "pod-44bb242e-b47a-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030591708s STEP: Saw pod success Jun 22 11:19:46.085: INFO: Pod "pod-44bb242e-b47a-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:19:46.088: INFO: Trying to get logs from node hunter-worker pod pod-44bb242e-b47a-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:19:46.126: INFO: Waiting for pod pod-44bb242e-b47a-11ea-8cd8-0242ac11001b to disappear Jun 22 11:19:46.130: INFO: Pod pod-44bb242e-b47a-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:19:46.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kmxb4" for this suite. Jun 22 11:19:52.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:19:52.184: INFO: namespace: e2e-tests-emptydir-kmxb4, resource: bindings, ignored listing per whitelist Jun 22 11:19:52.253: INFO: namespace e2e-tests-emptydir-kmxb4 deletion completed in 6.118499191s • [SLOW TEST:10.675 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:19:52.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 11:19:52.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ae0b383-b47a-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-m2s5x" to be "success or failure" Jun 22 11:19:52.364: INFO: Pod "downwardapi-volume-4ae0b383-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.351197ms Jun 22 11:19:54.376: INFO: Pod "downwardapi-volume-4ae0b383-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015722578s Jun 22 11:19:56.381: INFO: Pod "downwardapi-volume-4ae0b383-b47a-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020393033s STEP: Saw pod success Jun 22 11:19:56.381: INFO: Pod "downwardapi-volume-4ae0b383-b47a-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:19:56.384: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4ae0b383-b47a-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 11:19:56.407: INFO: Waiting for pod downwardapi-volume-4ae0b383-b47a-11ea-8cd8-0242ac11001b to disappear Jun 22 11:19:56.412: INFO: Pod downwardapi-volume-4ae0b383-b47a-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:19:56.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m2s5x" for this suite. Jun 22 11:20:02.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:20:02.456: INFO: namespace: e2e-tests-projected-m2s5x, resource: bindings, ignored listing per whitelist Jun 22 11:20:02.512: INFO: namespace e2e-tests-projected-m2s5x deletion completed in 6.097324121s • [SLOW TEST:10.259 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:20:02.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 22 11:20:02.653: INFO: Waiting up to 5m0s for pod "pod-50ffeca3-b47a-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-l6vwq" to be "success or failure" Jun 22 11:20:02.662: INFO: Pod "pod-50ffeca3-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.750209ms Jun 22 11:20:04.728: INFO: Pod "pod-50ffeca3-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074790238s Jun 22 11:20:06.734: INFO: Pod "pod-50ffeca3-b47a-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080309249s STEP: Saw pod success Jun 22 11:20:06.734: INFO: Pod "pod-50ffeca3-b47a-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:20:06.736: INFO: Trying to get logs from node hunter-worker2 pod pod-50ffeca3-b47a-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:20:06.790: INFO: Waiting for pod pod-50ffeca3-b47a-11ea-8cd8-0242ac11001b to disappear Jun 22 11:20:06.800: INFO: Pod pod-50ffeca3-b47a-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:20:06.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-l6vwq" for this suite. Jun 22 11:20:12.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:20:12.906: INFO: namespace: e2e-tests-emptydir-l6vwq, resource: bindings, ignored listing per whitelist Jun 22 11:20:12.910: INFO: namespace e2e-tests-emptydir-l6vwq deletion completed in 6.106049191s • [SLOW TEST:10.397 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:20:12.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Jun 22 11:20:13.053: INFO: Waiting up to 5m0s for pod "client-containers-5732a898-b47a-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-containers-8c49p" to be "success or failure" Jun 22 11:20:13.065: INFO: Pod "client-containers-5732a898-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.319268ms Jun 22 11:20:15.069: INFO: Pod "client-containers-5732a898-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015292109s Jun 22 11:20:17.072: INFO: Pod "client-containers-5732a898-b47a-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019024733s STEP: Saw pod success Jun 22 11:20:17.072: INFO: Pod "client-containers-5732a898-b47a-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:20:17.075: INFO: Trying to get logs from node hunter-worker pod client-containers-5732a898-b47a-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:20:17.111: INFO: Waiting for pod client-containers-5732a898-b47a-11ea-8cd8-0242ac11001b to disappear Jun 22 11:20:17.119: INFO: Pod client-containers-5732a898-b47a-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:20:17.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8c49p" for this suite. Jun 22 11:20:23.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:20:23.195: INFO: namespace: e2e-tests-containers-8c49p, resource: bindings, ignored listing per whitelist Jun 22 11:20:23.219: INFO: namespace e2e-tests-containers-8c49p deletion completed in 6.096432859s • [SLOW TEST:10.308 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:20:23.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:20:23.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 22 11:20:23.478: INFO: stderr: "" Jun 22 11:20:23.478: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:07:46Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:20:23.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tjdqw" for this suite. 
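A few specs back, the Docker Containers test overrode the image's default command (its ENTRYPOINT) by setting the container's command field; args would likewise override the image CMD. A minimal sketch (image and values are assumptions; only the container name test-container matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo         # placeholder for the UUID-suffixed name in the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["/bin/echo"]             # replaces the image ENTRYPOINT
    args: ["override", "arguments"]    # replaces the image CMD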
Jun 22 11:20:29.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:20:29.579: INFO: namespace: e2e-tests-kubectl-tjdqw, resource: bindings, ignored listing per whitelist Jun 22 11:20:29.593: INFO: namespace e2e-tests-kubectl-tjdqw deletion completed in 6.110779703s • [SLOW TEST:6.373 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:20:29.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-87md7 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 11:20:29.700: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 11:20:53.805: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.48:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-87md7 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 11:20:53.806: INFO: >>> kubeConfig: /root/.kube/config I0622 11:20:53.839122 7 log.go:172] (0xc001978370) (0xc001d41ae0) Create stream I0622 11:20:53.839180 7 log.go:172] (0xc001978370) (0xc001d41ae0) Stream added, broadcasting: 1 I0622 11:20:54.389749 7 log.go:172] (0xc001978370) Reply frame received for 1 I0622 11:20:54.389812 7 log.go:172] (0xc001978370) (0xc0023837c0) Create stream I0622 11:20:54.389829 7 log.go:172] (0xc001978370) (0xc0023837c0) Stream added, broadcasting: 3 I0622 11:20:54.390858 7 log.go:172] (0xc001978370) Reply frame received for 3 I0622 11:20:54.390891 7 log.go:172] (0xc001978370) (0xc001d41b80) Create stream I0622 11:20:54.390904 7 log.go:172] (0xc001978370) (0xc001d41b80) Stream added, broadcasting: 5 I0622 11:20:54.391821 7 log.go:172] (0xc001978370) Reply frame received for 5 I0622 11:20:54.563667 7 log.go:172] (0xc001978370) Data frame received for 3 I0622 11:20:54.563689 7 log.go:172] (0xc0023837c0) (3) Data frame handling I0622 11:20:54.563701 7 log.go:172] (0xc0023837c0) (3) Data frame sent I0622 11:20:54.563712 7 log.go:172] (0xc001978370) Data frame received for 3 I0622 11:20:54.563716 7 log.go:172] (0xc0023837c0) (3) Data frame handling I0622 11:20:54.563737 7 log.go:172] (0xc001978370) Data frame received for 5 I0622 11:20:54.563752 7 log.go:172] (0xc001d41b80) (5) Data frame handling 
I0622 11:20:54.566091 7 log.go:172] (0xc001978370) Data frame received for 1 I0622 11:20:54.566105 7 log.go:172] (0xc001d41ae0) (1) Data frame handling I0622 11:20:54.566118 7 log.go:172] (0xc001d41ae0) (1) Data frame sent I0622 11:20:54.566373 7 log.go:172] (0xc001978370) (0xc001d41ae0) Stream removed, broadcasting: 1 I0622 11:20:54.566430 7 log.go:172] (0xc001978370) Go away received I0622 11:20:54.566571 7 log.go:172] (0xc001978370) (0xc001d41ae0) Stream removed, broadcasting: 1 I0622 11:20:54.566605 7 log.go:172] (0xc001978370) (0xc0023837c0) Stream removed, broadcasting: 3 I0622 11:20:54.566633 7 log.go:172] (0xc001978370) (0xc001d41b80) Stream removed, broadcasting: 5 Jun 22 11:20:54.566: INFO: Found all expected endpoints: [netserver-0] Jun 22 11:20:54.570: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.79:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-87md7 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 11:20:54.570: INFO: >>> kubeConfig: /root/.kube/config I0622 11:20:54.608589 7 log.go:172] (0xc000e12790) (0xc002383ae0) Create stream I0622 11:20:54.608610 7 log.go:172] (0xc000e12790) (0xc002383ae0) Stream added, broadcasting: 1 I0622 11:20:54.611254 7 log.go:172] (0xc000e12790) Reply frame received for 1 I0622 11:20:54.611327 7 log.go:172] (0xc000e12790) (0xc001d41c20) Create stream I0622 11:20:54.611357 7 log.go:172] (0xc000e12790) (0xc001d41c20) Stream added, broadcasting: 3 I0622 11:20:54.612256 7 log.go:172] (0xc000e12790) Reply frame received for 3 I0622 11:20:54.612315 7 log.go:172] (0xc000e12790) (0xc0022a9720) Create stream I0622 11:20:54.612332 7 log.go:172] (0xc000e12790) (0xc0022a9720) Stream added, broadcasting: 5 I0622 11:20:54.613369 7 log.go:172] (0xc000e12790) Reply frame received for 5 I0622 11:20:54.677981 7 log.go:172] (0xc000e12790) Data frame received for 3 I0622 11:20:54.678086 7 log.go:172] (0xc001d41c20) (3) Data frame handling I0622 11:20:54.678175 7 log.go:172] (0xc001d41c20) (3) Data frame sent I0622 11:20:54.678205 7 log.go:172] (0xc000e12790) Data frame received for 3 I0622 11:20:54.678227 7 log.go:172] (0xc001d41c20) (3) Data frame handling I0622 11:20:54.678427 7 log.go:172] (0xc000e12790) Data frame received for 5 I0622 11:20:54.678461 7 log.go:172] (0xc0022a9720) (5) Data frame handling I0622 11:20:54.680121 7 log.go:172] (0xc000e12790) Data frame received for 1 I0622 11:20:54.680142 7 log.go:172] (0xc002383ae0) (1) Data frame handling I0622 11:20:54.680160 7 log.go:172] (0xc002383ae0) (1) Data frame sent I0622 11:20:54.680174 7 log.go:172] (0xc000e12790) (0xc002383ae0) Stream removed, broadcasting: 1 I0622 11:20:54.680277 7 log.go:172] (0xc000e12790) (0xc002383ae0) Stream removed, broadcasting: 1 I0622 11:20:54.680293 7 log.go:172] (0xc000e12790) (0xc001d41c20) Stream removed, broadcasting: 3 I0622 11:20:54.680304 7 log.go:172] (0xc000e12790) (0xc0022a9720) Stream removed, broadcasting: 5 I0622 11:20:54.680327 7 log.go:172] (0xc000e12790) Go away received Jun 22 11:20:54.680: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:20:54.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-87md7" for this suite. 
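The networking spec above checks node-to-pod HTTP reachability by exec'ing curl inside host-test-container-pod against each netserver pod's IP on port 8080 (the two ExecWithOptions lines show the exact commands). The exec target is a host-network helper pod along these lines (image and command are assumptions; any image with a shell and curl would do):

apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  hostNetwork: true                    # runs in the node's network namespace, so the curl originates from the node
  containers:
  - name: hostexec
    image: curlimages/curl             # assumed image providing sh and curl
    command: ["sh", "-c", "sleep 3600"]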
Jun 22 11:21:18.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:21:18.733: INFO: namespace: e2e-tests-pod-network-test-87md7, resource: bindings, ignored listing per whitelist Jun 22 11:21:18.766: INFO: namespace e2e-tests-pod-network-test-87md7 deletion completed in 24.082047914s • [SLOW TEST:49.173 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:21:18.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 22 11:21:18.834: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 11:21:18.848: INFO: Waiting for terminating namespaces to be deleted... Jun 22 11:21:18.850: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 22 11:21:18.855: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 22 11:21:18.855: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 11:21:18.855: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 22 11:21:18.855: INFO: Container kindnet-cni ready: true, restart count 0 Jun 22 11:21:18.855: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 22 11:21:18.855: INFO: Container coredns ready: true, restart count 0 Jun 22 11:21:18.855: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 22 11:21:18.860: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 22 11:21:18.860: INFO: Container kindnet-cni ready: true, restart count 0 Jun 22 11:21:18.860: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 22 11:21:18.860: INFO: Container coredns ready: true, restart count 0 Jun 22 11:21:18.860: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 22 11:21:18.860: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
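The scheduling predicate being validated here is plain nodeSelector matching: the next steps apply a random kubernetes.io/e2e-... label (value 42) to the chosen node and relaunch the pod with a matching selector, expecting it to land on that node. The relaunched pod looks roughly like this sketch (pod name, image and the label key are placeholders for the generated values):

apiVersion: v1
kind: Pod
metadata:
  name: with-labels                    # assumed name
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "42"    # placeholder key; the real key embeds a generated UUID
  containers:
  - name: with-labels
    image: docker.io/library/nginx:1.14-alpine   # assumed image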
STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-80e32dde-b47a-11ea-8cd8-0242ac11001b 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-80e32dde-b47a-11ea-8cd8-0242ac11001b off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-80e32dde-b47a-11ea-8cd8-0242ac11001b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:21:27.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-ltdnn" for this suite. Jun 22 11:21:35.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:21:35.154: INFO: namespace: e2e-tests-sched-pred-ltdnn, resource: bindings, ignored listing per whitelist Jun 22 11:21:35.179: INFO: namespace e2e-tests-sched-pred-ltdnn deletion completed in 8.093357332s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:16.412 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:21:35.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0622 11:22:05.827041 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 22 11:22:05.827: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:22:05.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-nsc2m" for this suite. Jun 22 11:22:13.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:22:13.954: INFO: namespace: e2e-tests-gc-nsc2m, resource: bindings, ignored listing per whitelist Jun 22 11:22:13.956: INFO: namespace e2e-tests-gc-nsc2m deletion completed in 8.12588646s • [SLOW TEST:38.777 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:22:13.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-9f54cfeb-b47a-11ea-8cd8-0242ac11001b STEP: Creating secret with name secret-projected-all-test-volume-9f54cfd8-b47a-11ea-8cd8-0242ac11001b STEP: Creating a pod to test Check all projections for projected volume plugin Jun 22 11:22:14.062: INFO: Waiting up to 5m0s for pod "projected-volume-9f54cf94-b47a-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-fhtdm" to be "success or failure" Jun 22 11:22:14.078: INFO: Pod "projected-volume-9f54cf94-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", 
Reason="", readiness=false. Elapsed: 15.562289ms Jun 22 11:22:16.082: INFO: Pod "projected-volume-9f54cf94-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01914986s Jun 22 11:22:18.085: INFO: Pod "projected-volume-9f54cf94-b47a-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022694473s STEP: Saw pod success Jun 22 11:22:18.085: INFO: Pod "projected-volume-9f54cf94-b47a-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:22:18.088: INFO: Trying to get logs from node hunter-worker pod projected-volume-9f54cf94-b47a-11ea-8cd8-0242ac11001b container projected-all-volume-test: STEP: delete the pod Jun 22 11:22:18.140: INFO: Waiting for pod projected-volume-9f54cf94-b47a-11ea-8cd8-0242ac11001b to disappear Jun 22 11:22:18.152: INFO: Pod projected-volume-9f54cf94-b47a-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:22:18.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fhtdm" for this suite. Jun 22 11:22:24.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:22:24.224: INFO: namespace: e2e-tests-projected-fhtdm, resource: bindings, ignored listing per whitelist Jun 22 11:22:24.248: INFO: namespace e2e-tests-projected-fhtdm deletion completed in 6.092447622s • [SLOW TEST:10.291 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:22:24.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:22:30.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-v55jq" for this suite. 
Jun 22 11:22:36.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:22:36.672: INFO: namespace: e2e-tests-namespaces-v55jq, resource: bindings, ignored listing per whitelist Jun 22 11:22:36.678: INFO: namespace e2e-tests-namespaces-v55jq deletion completed in 6.146618047s STEP: Destroying namespace "e2e-tests-nsdeletetest-kj6bx" for this suite. Jun 22 11:22:36.681: INFO: Namespace e2e-tests-nsdeletetest-kj6bx was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-ml5x6" for this suite. Jun 22 11:22:42.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:22:42.729: INFO: namespace: e2e-tests-nsdeletetest-ml5x6, resource: bindings, ignored listing per whitelist Jun 22 11:22:42.770: INFO: namespace e2e-tests-nsdeletetest-ml5x6 deletion completed in 6.089031471s • [SLOW TEST:18.522 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:22:42.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jun 22 11:22:42.894: INFO: Waiting up to 5m0s for pod "var-expansion-b0872a20-b47a-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-var-expansion-g7gnt" to be "success or failure" Jun 22 11:22:42.907: INFO: Pod "var-expansion-b0872a20-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.545412ms Jun 22 11:22:44.913: INFO: Pod "var-expansion-b0872a20-b47a-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01983601s Jun 22 11:22:46.917: INFO: Pod "var-expansion-b0872a20-b47a-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023216223s STEP: Saw pod success Jun 22 11:22:46.917: INFO: Pod "var-expansion-b0872a20-b47a-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:22:46.919: INFO: Trying to get logs from node hunter-worker pod var-expansion-b0872a20-b47a-11ea-8cd8-0242ac11001b container dapi-container: STEP: delete the pod Jun 22 11:22:46.952: INFO: Waiting for pod var-expansion-b0872a20-b47a-11ea-8cd8-0242ac11001b to disappear Jun 22 11:22:46.966: INFO: Pod var-expansion-b0872a20-b47a-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:22:46.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-g7gnt" for this suite. 
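Annotation: the var-expansion test above relies on `$(VAR)` references in an env entry being expanded from earlier entries in the same container. A minimal sketch of a pod spec using that mechanism, built with the k8s.io/api Go types (assumed available as module dependencies; the object and value names here are illustrative, not the test's own):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "env-composition-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) are expanded by the kubelet from the
					// entries above before the container starts.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```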
Jun 22 11:22:52.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:22:53.032: INFO: namespace: e2e-tests-var-expansion-g7gnt, resource: bindings, ignored listing per whitelist Jun 22 11:22:53.065: INFO: namespace e2e-tests-var-expansion-g7gnt deletion completed in 6.094869866s • [SLOW TEST:10.296 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:22:53.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 22 11:23:01.231: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 11:23:01.236: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 11:23:03.236: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 11:23:03.240: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 11:23:05.237: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 11:23:05.240: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 11:23:07.236: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 11:23:07.241: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 11:23:09.237: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 11:23:09.242: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 11:23:11.237: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 11:23:11.242: INFO: Pod pod-with-poststart-http-hook still exists Jun 22 11:23:13.237: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 22 11:23:13.241: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:23:13.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-d28v6" for this suite. 
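Annotation: the lifecycle-hook test above first starts a helper pod that serves HTTP, then creates a pod whose container declares a postStart httpGet hook aimed at it. A sketch of such a container spec, written against the 1.13-era k8s.io/api field names this suite was built with (newer releases renamed the embedded handler type from `Handler` to `LifecycleHandler`); the host, port, and path values are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "nginx",
		Lifecycle: &corev1.Lifecycle{
			// postStart fires right after the container is created; the
			// kubelet performs the GET against the given host/port/path.
			PostStart: &corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Host: "10.244.2.80", // hook-handler pod IP (placeholder)
					Port: intstr.FromInt(8080),
					Path: "/echo?msg=poststart",
				},
			},
		},
	}

	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}
```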
Jun 22 11:23:35.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:23:35.275: INFO: namespace: e2e-tests-container-lifecycle-hook-d28v6, resource: bindings, ignored listing per whitelist Jun 22 11:23:35.337: INFO: namespace e2e-tests-container-lifecycle-hook-d28v6 deletion completed in 22.092187998s • [SLOW TEST:42.271 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:23:35.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:23:41.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-djrcl" for this suite. 
Jun 22 11:23:47.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:23:47.827: INFO: namespace: e2e-tests-emptydir-wrapper-djrcl, resource: bindings, ignored listing per whitelist Jun 22 11:23:47.864: INFO: namespace e2e-tests-emptydir-wrapper-djrcl deletion completed in 6.074594838s • [SLOW TEST:12.527 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:23:47.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0622 11:24:00.404939 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 22 11:24:00.405: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:24:00.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jb44g" for this suite. 
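Annotation: the garbage-collector test above gives half of one ReplicationController's pods a second ownerReference pointing at another RC, deletes the first owner, and expects those dual-owned pods to survive. A sketch of what such dual-owner metadata looks like, built with the apimachinery types; the UIDs are placeholders, only the RC names come from the log:

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	blockOwnerDeletion := true

	// A pod listing two ReplicationControllers as owners: the garbage
	// collector only deletes it once *all* of its owners are gone.
	meta := metav1.ObjectMeta{
		Name:      "simpletest-pod",
		Namespace: "e2e-tests-gc-jb44g",
		OwnerReferences: []metav1.OwnerReference{
			{
				APIVersion:         "v1",
				Kind:               "ReplicationController",
				Name:               "simpletest-rc-to-be-deleted",
				UID:                types.UID("00000000-0000-0000-0000-000000000001"), // placeholder
				BlockOwnerDeletion: &blockOwnerDeletion,
			},
			{
				APIVersion:         "v1",
				Kind:               "ReplicationController",
				Name:               "simpletest-rc-to-stay",
				UID:                types.UID("00000000-0000-0000-0000-000000000002"), // placeholder
				BlockOwnerDeletion: &blockOwnerDeletion,
			},
		},
	}

	out, _ := json.MarshalIndent(meta, "", "  ")
	fmt.Println(string(out))
}
```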
Jun 22 11:24:08.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:24:08.577: INFO: namespace: e2e-tests-gc-jb44g, resource: bindings, ignored listing per whitelist Jun 22 11:24:08.627: INFO: namespace e2e-tests-gc-jb44g deletion completed in 8.218207199s • [SLOW TEST:20.762 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:24:08.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jun 22 11:24:09.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hg87m' Jun 22 11:24:12.839: INFO: stderr: "" Jun 22 11:24:12.839: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Jun 22 11:24:13.844: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:24:13.844: INFO: Found 0 / 1 Jun 22 11:24:14.863: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:24:14.863: INFO: Found 0 / 1 Jun 22 11:24:15.844: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:24:15.844: INFO: Found 0 / 1 Jun 22 11:24:16.844: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:24:16.844: INFO: Found 0 / 1 Jun 22 11:24:17.844: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:24:17.844: INFO: Found 1 / 1 Jun 22 11:24:17.844: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 22 11:24:17.847: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:24:17.847: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jun 22 11:24:17.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xmvmn redis-master --namespace=e2e-tests-kubectl-hg87m' Jun 22 11:24:18.001: INFO: stderr: "" Jun 22 11:24:18.001: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jun 11:24:16.299 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jun 11:24:16.302 # Server started, Redis version 3.2.12\n1:M 22 Jun 11:24:16.302 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jun 11:24:16.302 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jun 22 11:24:18.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xmvmn redis-master --namespace=e2e-tests-kubectl-hg87m --tail=1' Jun 22 11:24:18.135: INFO: stderr: "" Jun 22 11:24:18.135: INFO: stdout: "1:M 22 Jun 11:24:16.302 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jun 22 11:24:18.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xmvmn redis-master --namespace=e2e-tests-kubectl-hg87m --limit-bytes=1' Jun 22 11:24:18.241: INFO: stderr: "" Jun 22 11:24:18.241: INFO: stdout: " " STEP: exposing timestamps Jun 22 11:24:18.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xmvmn redis-master --namespace=e2e-tests-kubectl-hg87m --tail=1 --timestamps' Jun 22 11:24:18.478: INFO: stderr: "" Jun 22 11:24:18.478: INFO: stdout: "2020-06-22T11:24:16.32684628Z 1:M 22 Jun 11:24:16.302 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jun 22 11:24:20.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xmvmn redis-master --namespace=e2e-tests-kubectl-hg87m --since=1s' Jun 22 11:24:21.098: INFO: stderr: "" Jun 22 11:24:21.098: INFO: stdout: "" Jun 22 11:24:21.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xmvmn redis-master --namespace=e2e-tests-kubectl-hg87m --since=24h' Jun 22 11:24:21.235: INFO: stderr: "" Jun 22 11:24:21.235: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jun 11:24:16.299 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jun 11:24:16.302 # Server started, Redis version 3.2.12\n1:M 22 Jun 11:24:16.302 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jun 11:24:16.302 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jun 22 11:24:21.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hg87m' Jun 22 11:24:21.403: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 11:24:21.403: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jun 22 11:24:21.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-hg87m' Jun 22 11:24:21.545: INFO: stderr: "No resources found.\n" Jun 22 11:24:21.545: INFO: stdout: "" Jun 22 11:24:21.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-hg87m -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 11:24:21.853: INFO: stderr: "" Jun 22 11:24:21.853: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:24:21.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hg87m" for this suite. 
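Annotation: the kubectl-logs test above filters output with `--tail=1`, `--limit-bytes=1`, `--timestamps`, and `--since=1s/--since=24h`. The same filters correspond to fields on the core/v1 PodLogOptions object; a sketch that only builds and prints such an options struct (the container name is the one from the log, the flag-to-field mapping is the point of the example):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	tail := int64(1)
	limitBytes := int64(1)
	sinceSeconds := int64(1)

	// Each field maps onto one of the kubectl flags exercised above:
	// --tail=1, --limit-bytes=1, --timestamps, --since=1s.
	opts := corev1.PodLogOptions{
		Container:    "redis-master",
		TailLines:    &tail,
		LimitBytes:   &limitBytes,
		Timestamps:   true,
		SinceSeconds: &sinceSeconds,
	}

	out, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(out))
}
```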
Jun 22 11:24:27.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:24:28.008: INFO: namespace: e2e-tests-kubectl-hg87m, resource: bindings, ignored listing per whitelist Jun 22 11:24:28.020: INFO: namespace e2e-tests-kubectl-hg87m deletion completed in 6.163015728s • [SLOW TEST:19.393 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:24:28.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Jun 22 11:24:28.119: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-wsx9m" to be "success or failure" Jun 22 11:24:28.163: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 43.435877ms Jun 22 11:24:30.166: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047304838s Jun 22 11:24:32.170: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051234173s Jun 22 11:24:34.175: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055563709s STEP: Saw pod success Jun 22 11:24:34.175: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 22 11:24:34.178: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 22 11:24:34.215: INFO: Waiting for pod pod-host-path-test to disappear Jun 22 11:24:34.233: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:24:34.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-wsx9m" for this suite. 
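Annotation: the HostPath test above mounts a host directory into a pod and checks the mode the container observes. A sketch of a comparable pod spec with a hostPath volume, using the core/v1 Go types; the host path, image, and command are illustrative stand-ins, not the test's actual values:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// /tmp is used here only as an example host directory.
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```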
Jun 22 11:24:42.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:24:42.277: INFO: namespace: e2e-tests-hostpath-wsx9m, resource: bindings, ignored listing per whitelist Jun 22 11:24:42.317: INFO: namespace e2e-tests-hostpath-wsx9m deletion completed in 8.081195523s • [SLOW TEST:14.297 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:24:42.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-wmr6p STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-wmr6p STEP: Deleting pre-stop pod Jun 22 11:24:58.297: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:24:58.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-wmr6p" for this suite. 
Jun 22 11:25:36.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:25:36.372: INFO: namespace: e2e-tests-prestop-wmr6p, resource: bindings, ignored listing per whitelist Jun 22 11:25:36.400: INFO: namespace e2e-tests-prestop-wmr6p deletion completed in 38.088266565s • [SLOW TEST:54.083 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:25:36.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:25:40.613: INFO: Waiting up to 5m0s for pod "client-envvars-1a7323f0-b47b-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-pods-l2q4c" to be "success or failure" Jun 22 11:25:40.685: INFO: Pod "client-envvars-1a7323f0-b47b-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 72.229383ms Jun 22 11:25:42.712: INFO: Pod "client-envvars-1a7323f0-b47b-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098941704s Jun 22 11:25:44.991: INFO: Pod "client-envvars-1a7323f0-b47b-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.377952573s Jun 22 11:25:46.995: INFO: Pod "client-envvars-1a7323f0-b47b-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.38238114s STEP: Saw pod success Jun 22 11:25:46.995: INFO: Pod "client-envvars-1a7323f0-b47b-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:25:46.998: INFO: Trying to get logs from node hunter-worker pod client-envvars-1a7323f0-b47b-11ea-8cd8-0242ac11001b container env3cont: STEP: delete the pod Jun 22 11:25:47.030: INFO: Waiting for pod client-envvars-1a7323f0-b47b-11ea-8cd8-0242ac11001b to disappear Jun 22 11:25:47.071: INFO: Pod client-envvars-1a7323f0-b47b-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:25:47.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-l2q4c" for this suite. 
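Annotation: the Pods test above checks that a container started after a Service exists sees the kubelet-injected `<NAME>_SERVICE_HOST` / `<NAME>_SERVICE_PORT` variables. A tiny sketch of what such a checking container could run; the `FOOSERVICE_*` names assume a Service called "fooservice" and are illustrative only:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The kubelet injects host/port variables for every Service that existed
	// when the pod started; KUBERNETES_* is always present for the API server.
	for _, key := range []string{
		"KUBERNETES_SERVICE_HOST",
		"KUBERNETES_SERVICE_PORT",
		"FOOSERVICE_SERVICE_HOST", // assumes a Service named "fooservice"
		"FOOSERVICE_SERVICE_PORT",
	} {
		if val, ok := os.LookupEnv(key); ok {
			fmt.Printf("%s=%s\n", key, val)
		} else {
			fmt.Printf("%s is not set\n", key)
		}
	}
}
```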
Jun 22 11:26:37.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:26:37.244: INFO: namespace: e2e-tests-pods-l2q4c, resource: bindings, ignored listing per whitelist Jun 22 11:26:37.289: INFO: namespace e2e-tests-pods-l2q4c deletion completed in 50.112248914s • [SLOW TEST:60.889 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:26:37.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 11:26:37.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-jt7qc' Jun 22 11:26:37.545: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 11:26:37.546: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jun 22 11:26:41.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-jt7qc' Jun 22 11:26:41.720: INFO: stderr: "" Jun 22 11:26:41.720: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:26:41.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jt7qc" for this suite. 
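Annotation: the kubectl-run test above triggers the deprecation warning for `--generator=deployment/v1beta1`; building the Deployment object directly sidesteps the generator. A sketch of a roughly equivalent object using the apps/v1 Go types (the image comes from the log; the label key/values and replica count are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}

	deploy := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{Kind: "Deployment", APIVersion: "apps/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-deployment",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(deploy, "", "  ")
	fmt.Println(string(out))
}
```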
Jun 22 11:27:03.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:27:03.788: INFO: namespace: e2e-tests-kubectl-jt7qc, resource: bindings, ignored listing per whitelist Jun 22 11:27:03.798: INFO: namespace e2e-tests-kubectl-jt7qc deletion completed in 22.075032663s • [SLOW TEST:26.508 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:27:03.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-n8cd4 Jun 22 11:27:07.941: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-n8cd4 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 11:27:07.944: INFO: Initial restart count of pod liveness-exec is 0 Jun 22 11:27:56.057: INFO: Restart count of pod e2e-tests-container-probe-n8cd4/liveness-exec is now 1 (48.113174619s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:27:56.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-n8cd4" for this suite. 
Jun 22 11:28:02.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:28:02.123: INFO: namespace: e2e-tests-container-probe-n8cd4, resource: bindings, ignored listing per whitelist Jun 22 11:28:02.187: INFO: namespace e2e-tests-container-probe-n8cd4 deletion completed in 6.111722052s • [SLOW TEST:58.389 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:28:02.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-6ee59e7a-b47b-11ea-8cd8-0242ac11001b STEP: Creating configMap with name cm-test-opt-upd-6ee59f17-b47b-11ea-8cd8-0242ac11001b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6ee59e7a-b47b-11ea-8cd8-0242ac11001b STEP: Updating configmap cm-test-opt-upd-6ee59f17-b47b-11ea-8cd8-0242ac11001b STEP: Creating configMap with name cm-test-opt-create-6ee59f5e-b47b-11ea-8cd8-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:28:10.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zv52n" for this suite. 
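Annotation: the projected-configMap test above mounts optional configMap projections and expects the volume contents to follow creates, updates, and deletes. A sketch of one such optional projection using the core/v1 types; the configMap name is the "opt-create" one printed in the log, the volume name is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true

	// An optional projection: the pod still starts (and the mounted files
	// appear later) even if the referenced configMap does not exist yet.
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "cm-test-opt-create-6ee59f5e-b47b-11ea-8cd8-0242ac11001b",
						},
						Optional: &optional,
					},
				}},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```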
Jun 22 11:28:32.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:28:32.446: INFO: namespace: e2e-tests-projected-zv52n, resource: bindings, ignored listing per whitelist Jun 22 11:28:32.512: INFO: namespace e2e-tests-projected-zv52n deletion completed in 22.110085991s • [SLOW TEST:30.325 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:28:32.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 22 11:28:32.604: INFO: Waiting up to 5m0s for pod "pod-80f7c649-b47b-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-bxxhg" to be "success or failure" Jun 22 11:28:32.651: INFO: Pod "pod-80f7c649-b47b-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 46.860626ms Jun 22 11:28:34.655: INFO: Pod "pod-80f7c649-b47b-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050727655s Jun 22 11:28:36.660: INFO: Pod "pod-80f7c649-b47b-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055083143s STEP: Saw pod success Jun 22 11:28:36.660: INFO: Pod "pod-80f7c649-b47b-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:28:36.662: INFO: Trying to get logs from node hunter-worker2 pod pod-80f7c649-b47b-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:28:36.697: INFO: Waiting for pod pod-80f7c649-b47b-11ea-8cd8-0242ac11001b to disappear Jun 22 11:28:36.714: INFO: Pod pod-80f7c649-b47b-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:28:36.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bxxhg" for this suite. 
Jun 22 11:28:42.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:28:42.800: INFO: namespace: e2e-tests-emptydir-bxxhg, resource: bindings, ignored listing per whitelist Jun 22 11:28:42.811: INFO: namespace e2e-tests-emptydir-bxxhg deletion completed in 6.093937868s • [SLOW TEST:10.298 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:28:42.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-wssqb.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-wssqb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wssqb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-wssqb.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-wssqb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wssqb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 11:28:51.161: INFO: DNS probes using e2e-tests-dns-wssqb/dns-test-8724d8b8-b47b-11ea-8cd8-0242ac11001b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:28:51.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-wssqb" for this suite. 
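Annotation: the DNS test above loops `dig` over UDP and TCP against names like `kubernetes.default` and `kubernetes.default.svc.cluster.local`. From inside any pod, a comparable check is a plain resolver lookup; a minimal sketch, noting that the short forms depend on the pod's resolv.conf search path:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The same names the e2e probe queries; the non-FQDN forms resolve via
	// the search domains in the pod's resolv.conf.
	names := []string{
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster.local",
	}

	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}
```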
Jun 22 11:28:57.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:28:57.377: INFO: namespace: e2e-tests-dns-wssqb, resource: bindings, ignored listing per whitelist Jun 22 11:28:57.389: INFO: namespace e2e-tests-dns-wssqb deletion completed in 6.14464522s • [SLOW TEST:14.578 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:28:57.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 22 11:28:57.491: INFO: Waiting up to 5m0s for pod "pod-8fce483a-b47b-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-j2qqq" to be "success or failure" Jun 22 11:28:57.496: INFO: Pod "pod-8fce483a-b47b-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.779067ms Jun 22 11:28:59.499: INFO: Pod "pod-8fce483a-b47b-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008181841s Jun 22 11:29:01.504: INFO: Pod "pod-8fce483a-b47b-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012776332s STEP: Saw pod success Jun 22 11:29:01.504: INFO: Pod "pod-8fce483a-b47b-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:29:01.507: INFO: Trying to get logs from node hunter-worker2 pod pod-8fce483a-b47b-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:29:01.570: INFO: Waiting for pod pod-8fce483a-b47b-11ea-8cd8-0242ac11001b to disappear Jun 22 11:29:01.582: INFO: Pod pod-8fce483a-b47b-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:29:01.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-j2qqq" for this suite. 
Jun 22 11:29:07.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:29:07.628: INFO: namespace: e2e-tests-emptydir-j2qqq, resource: bindings, ignored listing per whitelist Jun 22 11:29:07.663: INFO: namespace e2e-tests-emptydir-j2qqq deletion completed in 6.077854586s • [SLOW TEST:10.274 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:29:07.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 11:29:07.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95ebf84c-b47b-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-wnf4j" to be "success or failure" Jun 22 11:29:07.768: INFO: Pod "downwardapi-volume-95ebf84c-b47b-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.193913ms Jun 22 11:29:09.772: INFO: Pod "downwardapi-volume-95ebf84c-b47b-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007204852s Jun 22 11:29:11.775: INFO: Pod "downwardapi-volume-95ebf84c-b47b-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010408341s STEP: Saw pod success Jun 22 11:29:11.775: INFO: Pod "downwardapi-volume-95ebf84c-b47b-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:29:11.777: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-95ebf84c-b47b-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 11:29:11.818: INFO: Waiting for pod downwardapi-volume-95ebf84c-b47b-11ea-8cd8-0242ac11001b to disappear Jun 22 11:29:11.856: INFO: Pod downwardapi-volume-95ebf84c-b47b-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:29:11.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wnf4j" for this suite. 
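The downward API volume in this test exposes the container's own memory limit as a file, which the client-container prints so the framework can assert on its logs. A sketch of the relevant projected downwardAPI volume and resource stanza, again using the core/v1 types; the file path, mount path, and 64Mi limit are illustrative values, not necessarily what the framework sets.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// The container's limits.memory is written to this file.
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}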
Jun 22 11:29:17.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:29:17.922: INFO: namespace: e2e-tests-projected-wnf4j, resource: bindings, ignored listing per whitelist Jun 22 11:29:18.010: INFO: namespace e2e-tests-projected-wnf4j deletion completed in 6.149422409s • [SLOW TEST:10.346 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:29:18.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-4g46 STEP: Creating a pod to test atomic-volume-subpath Jun 22 11:29:18.132: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4g46" in namespace "e2e-tests-subpath-9bgld" to be "success or failure" Jun 22 11:29:18.136: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Pending", Reason="", readiness=false. Elapsed: 3.572375ms Jun 22 11:29:20.153: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020573394s Jun 22 11:29:22.157: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024967227s Jun 22 11:29:24.161: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028333798s Jun 22 11:29:26.165: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Running", Reason="", readiness=false. Elapsed: 8.032930802s Jun 22 11:29:28.170: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Running", Reason="", readiness=false. Elapsed: 10.037927496s Jun 22 11:29:30.175: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Running", Reason="", readiness=false. Elapsed: 12.042465034s Jun 22 11:29:32.179: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Running", Reason="", readiness=false. Elapsed: 14.04696861s Jun 22 11:29:34.184: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Running", Reason="", readiness=false. Elapsed: 16.051616819s Jun 22 11:29:36.188: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Running", Reason="", readiness=false. Elapsed: 18.055970522s Jun 22 11:29:38.194: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Running", Reason="", readiness=false. Elapsed: 20.061254311s Jun 22 11:29:40.199: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.066052796s Jun 22 11:29:42.217: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Running", Reason="", readiness=false. Elapsed: 24.084829351s Jun 22 11:29:44.222: INFO: Pod "pod-subpath-test-projected-4g46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.089175059s STEP: Saw pod success Jun 22 11:29:44.222: INFO: Pod "pod-subpath-test-projected-4g46" satisfied condition "success or failure" Jun 22 11:29:44.225: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-4g46 container test-container-subpath-projected-4g46: STEP: delete the pod Jun 22 11:29:44.263: INFO: Waiting for pod pod-subpath-test-projected-4g46 to disappear Jun 22 11:29:44.275: INFO: Pod pod-subpath-test-projected-4g46 no longer exists STEP: Deleting pod pod-subpath-test-projected-4g46 Jun 22 11:29:44.275: INFO: Deleting pod "pod-subpath-test-projected-4g46" in namespace "e2e-tests-subpath-9bgld" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:29:44.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-9bgld" for this suite. Jun 22 11:29:50.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:29:50.388: INFO: namespace: e2e-tests-subpath-9bgld, resource: bindings, ignored listing per whitelist Jun 22 11:29:50.392: INFO: namespace e2e-tests-subpath-9bgld deletion completed in 6.092038708s • [SLOW TEST:32.382 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:29:50.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-af69fd87-b47b-11ea-8cd8-0242ac11001b STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-af69fd87-b47b-11ea-8cd8-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:29:58.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8x25q" for this suite. 
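The update test above mounts a ConfigMap through a projected volume, updates the ConfigMap, and then waits for the kubelet to resync the volume contents; that resync, not the API update itself, is what the "waiting to observe update in volume" step covers. A sketch of the projected-ConfigMap volume shape with the core/v1 types; the ConfigMap name, key, and path here are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-upd",
						},
						// One key from the ConfigMap is projected as a file; when the
						// ConfigMap changes, the kubelet rewrites this file on its next
						// sync of the volume.
						Items: []corev1.KeyToPath{{Key: "data", Path: "path/to/data"}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

Because the kubelet refreshes the projected contents in place, the pod sees the new value without being restarted, which is what the polling in the step above is checking for.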
Jun 22 11:30:20.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:30:20.638: INFO: namespace: e2e-tests-projected-8x25q, resource: bindings, ignored listing per whitelist Jun 22 11:30:20.693: INFO: namespace e2e-tests-projected-8x25q deletion completed in 22.096025497s • [SLOW TEST:30.301 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:30:20.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 22 11:30:20.792: INFO: PodSpec: initContainers in spec.initContainers Jun 22 11:31:10.002: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c175d3de-b47b-11ea-8cd8-0242ac11001b", GenerateName:"", Namespace:"e2e-tests-init-container-snm5c", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-snm5c/pods/pod-init-c175d3de-b47b-11ea-8cd8-0242ac11001b", UID:"c1767a07-b47b-11ea-99e8-0242ac110002", ResourceVersion:"17287166", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63728422220, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"792322443"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nx27c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001fb4040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nx27c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nx27c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nx27c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001dc20e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001a00060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dc2170)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dc2190)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001dc2198), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001dc219c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422220, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422220, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422220, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422220, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.65", StartTime:(*v1.Time)(0xc002640040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001d74070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001d740e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://a59ea4fb192dddcb3ee766325a600a8789057c83c8449a3855600eae64325062"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0026400a0), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002640080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:31:10.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-snm5c" for this suite. Jun 22 11:31:32.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:31:32.198: INFO: namespace: e2e-tests-init-container-snm5c, resource: bindings, ignored listing per whitelist Jun 22 11:31:32.251: INFO: namespace e2e-tests-init-container-snm5c deletion completed in 22.180138159s • [SLOW TEST:71.558 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:31:32.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 22 11:31:32.428: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-z6bjc,SelfLink:/api/v1/namespaces/e2e-tests-watch-z6bjc/configmaps/e2e-watch-test-resource-version,UID:ec1de91d-b47b-11ea-99e8-0242ac110002,ResourceVersion:17287231,Generation:0,CreationTimestamp:2020-06-22 11:31:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 11:31:32.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-z6bjc,SelfLink:/api/v1/namespaces/e2e-tests-watch-z6bjc/configmaps/e2e-watch-test-resource-version,UID:ec1de91d-b47b-11ea-99e8-0242ac110002,ResourceVersion:17287232,Generation:0,CreationTimestamp:2020-06-22 11:31:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:31:32.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-z6bjc" for this suite. Jun 22 11:31:38.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:31:38.563: INFO: namespace: e2e-tests-watch-z6bjc, resource: bindings, ignored listing per whitelist Jun 22 11:31:38.564: INFO: namespace e2e-tests-watch-z6bjc deletion completed in 6.113740503s • [SLOW TEST:6.313 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:31:38.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:31:38.679: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jun 22 11:31:38.691: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:38.694: INFO: Number of nodes with available pods: 0 Jun 22 11:31:38.694: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:31:39.701: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:39.704: INFO: Number of nodes with available pods: 0 Jun 22 11:31:39.704: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:31:40.699: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:40.702: INFO: Number of nodes with available pods: 0 Jun 22 11:31:40.702: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:31:41.699: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:41.703: INFO: Number of nodes with available pods: 0 Jun 22 11:31:41.703: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:31:42.699: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:42.703: INFO: Number of nodes with available pods: 0 Jun 22 11:31:42.703: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:31:43.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:43.705: INFO: Number of nodes with available pods: 2 Jun 22 11:31:43.705: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 22 11:31:43.757: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:43.757: INFO: Wrong image for pod: daemon-set-j99d8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:43.787: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:44.793: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:44.793: INFO: Wrong image for pod: daemon-set-j99d8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:44.797: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:45.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:45.792: INFO: Wrong image for pod: daemon-set-j99d8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 22 11:31:45.795: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:46.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:46.792: INFO: Wrong image for pod: daemon-set-j99d8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:46.796: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:47.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:47.792: INFO: Wrong image for pod: daemon-set-j99d8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:47.792: INFO: Pod daemon-set-j99d8 is not available Jun 22 11:31:47.795: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:48.793: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:48.793: INFO: Wrong image for pod: daemon-set-j99d8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:48.793: INFO: Pod daemon-set-j99d8 is not available Jun 22 11:31:48.796: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:49.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:49.792: INFO: Wrong image for pod: daemon-set-j99d8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:49.792: INFO: Pod daemon-set-j99d8 is not available Jun 22 11:31:49.814: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:50.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:50.792: INFO: Wrong image for pod: daemon-set-j99d8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:50.792: INFO: Pod daemon-set-j99d8 is not available Jun 22 11:31:50.796: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:51.797: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:51.797: INFO: Pod daemon-set-rpxng is not available Jun 22 11:31:51.829: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:52.799: INFO: Wrong image for pod: daemon-set-gvv9l. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:52.799: INFO: Pod daemon-set-rpxng is not available Jun 22 11:31:52.803: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:53.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:53.792: INFO: Pod daemon-set-rpxng is not available Jun 22 11:31:53.795: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:54.793: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:54.793: INFO: Pod daemon-set-rpxng is not available Jun 22 11:31:54.797: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:55.864: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:55.869: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:56.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:56.796: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:57.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:57.792: INFO: Pod daemon-set-gvv9l is not available Jun 22 11:31:57.796: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:58.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:58.792: INFO: Pod daemon-set-gvv9l is not available Jun 22 11:31:58.795: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:31:59.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 11:31:59.792: INFO: Pod daemon-set-gvv9l is not available Jun 22 11:31:59.795: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:32:00.792: INFO: Wrong image for pod: daemon-set-gvv9l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 22 11:32:00.792: INFO: Pod daemon-set-gvv9l is not available Jun 22 11:32:00.797: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:32:01.793: INFO: Pod daemon-set-lvlqb is not available Jun 22 11:32:01.797: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jun 22 11:32:01.800: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:32:01.803: INFO: Number of nodes with available pods: 1 Jun 22 11:32:01.803: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:32:02.808: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:32:02.811: INFO: Number of nodes with available pods: 1 Jun 22 11:32:02.811: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:32:03.808: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:32:03.812: INFO: Number of nodes with available pods: 1 Jun 22 11:32:03.812: INFO: Node hunter-worker is running more than one daemon pod Jun 22 11:32:04.807: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 11:32:04.809: INFO: Number of nodes with available pods: 2 Jun 22 11:32:04.809: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-hwbv5, will wait for the garbage collector to delete the pods Jun 22 11:32:04.876: INFO: Deleting DaemonSet.extensions daemon-set took: 4.452784ms Jun 22 11:32:04.976: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.141419ms Jun 22 11:32:11.380: INFO: Number of nodes with available pods: 0 Jun 22 11:32:11.380: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 11:32:11.383: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hwbv5/daemonsets","resourceVersion":"17287392"},"items":null} Jun 22 11:32:11.386: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hwbv5/pods","resourceVersion":"17287392"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:32:11.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-hwbv5" for this suite. 
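The rolling update traced above starts from docker.io/library/nginx:1.14-alpine, patches the DaemonSet template image to gcr.io/kubernetes-e2e-test-images/redis:1.0, and the repeated "Wrong image for pod" / "Pod ... is not available" lines show the controller replacing one pod per node until both worker nodes run the new image. A sketch of a DaemonSet with that update strategy using the apps/v1 types (the log deletes it via the extensions group still served on v1.13); the names, labels, and explicit maxUnavailable of 1 are illustrative, with 1 also being the default.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(1)
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{
					// At most one node's daemon pod may be unavailable at a time,
					// matching the one-pod-at-a-time replacement seen in the log.
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
	// Updating spec.template.spec.containers[0].image, as the test does, is what
	// triggers the rolling replacement observed above.
}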
Jun 22 11:32:17.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:32:17.444: INFO: namespace: e2e-tests-daemonsets-hwbv5, resource: bindings, ignored listing per whitelist Jun 22 11:32:17.481: INFO: namespace e2e-tests-daemonsets-hwbv5 deletion completed in 6.083462535s • [SLOW TEST:38.917 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:32:17.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jun 22 11:32:21.611: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-0712d0eb-b47c-11ea-8cd8-0242ac11001b", GenerateName:"", Namespace:"e2e-tests-pods-62wz4", SelfLink:"/api/v1/namespaces/e2e-tests-pods-62wz4/pods/pod-submit-remove-0712d0eb-b47c-11ea-8cd8-0242ac11001b", UID:"0713bd86-b47c-11ea-99e8-0242ac110002", ResourceVersion:"17287455", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63728422337, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"583952968"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-c4bqm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001b539c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c4bqm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e3c168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00260a960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e3c240)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e3c260)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001e3c268), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e3c26c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422337, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422341, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422341, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422337, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.101", StartTime:(*v1.Time)(0xc001c33ce0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001c33d40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://be4e4d31c5acf8ccbedba77ff2ac1507e7d02593885b0eb626e3b534769d8873"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 22 11:32:26.621: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:32:26.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-62wz4" for this suite. Jun 22 11:32:32.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:32:32.783: INFO: namespace: e2e-tests-pods-62wz4, resource: bindings, ignored listing per whitelist Jun 22 11:32:32.783: INFO: namespace e2e-tests-pods-62wz4 deletion completed in 6.156041654s • [SLOW TEST:15.302 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:32:32.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-10345dd6-b47c-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:32:32.915: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-10350d69-b47c-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-s2f8w" to be "success or failure" Jun 22 11:32:32.951: INFO: Pod "pod-projected-configmaps-10350d69-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.835889ms Jun 22 11:32:34.954: INFO: Pod "pod-projected-configmaps-10350d69-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038798011s Jun 22 11:32:36.957: INFO: Pod "pod-projected-configmaps-10350d69-b47c-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042444451s STEP: Saw pod success Jun 22 11:32:36.957: INFO: Pod "pod-projected-configmaps-10350d69-b47c-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:32:36.960: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-10350d69-b47c-11ea-8cd8-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 22 11:32:37.024: INFO: Waiting for pod pod-projected-configmaps-10350d69-b47c-11ea-8cd8-0242ac11001b to disappear Jun 22 11:32:37.045: INFO: Pod pod-projected-configmaps-10350d69-b47c-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:32:37.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-s2f8w" for this suite. Jun 22 11:32:43.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:32:43.091: INFO: namespace: e2e-tests-projected-s2f8w, resource: bindings, ignored listing per whitelist Jun 22 11:32:43.140: INFO: namespace e2e-tests-projected-s2f8w deletion completed in 6.092316673s • [SLOW TEST:10.357 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:32:43.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:32:43.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jun 22 11:32:43.310: INFO: stderr: "" Jun 22 11:32:43.310: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:07:46Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jun 22 11:32:43.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5n2nk' Jun 22 11:32:43.554: INFO: 
stderr: "" Jun 22 11:32:43.554: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 22 11:32:43.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5n2nk' Jun 22 11:32:43.847: INFO: stderr: "" Jun 22 11:32:43.847: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 22 11:32:44.852: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:32:44.852: INFO: Found 0 / 1 Jun 22 11:32:45.852: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:32:45.852: INFO: Found 0 / 1 Jun 22 11:32:46.852: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:32:46.852: INFO: Found 0 / 1 Jun 22 11:32:47.852: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:32:47.852: INFO: Found 1 / 1 Jun 22 11:32:47.852: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 22 11:32:47.856: INFO: Selector matched 1 pods for map[app:redis] Jun 22 11:32:47.856: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 22 11:32:47.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-l6z9x --namespace=e2e-tests-kubectl-5n2nk' Jun 22 11:32:47.988: INFO: stderr: "" Jun 22 11:32:47.989: INFO: stdout: "Name: redis-master-l6z9x\nNamespace: e2e-tests-kubectl-5n2nk\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Mon, 22 Jun 2020 11:32:43 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.102\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://e9fc7ff88f34fed78dc7c7e52886258beb95a5a5a25672ebc17935dcade856bb\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 22 Jun 2020 11:32:46 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ttgv8 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ttgv8:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ttgv8\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-5n2nk/redis-master-l6z9x to hunter-worker2\n Normal Pulled 2s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" Jun 22 11:32:47.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-5n2nk' Jun 22 11:32:48.122: INFO: stderr: "" Jun 22 11:32:48.122: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-5n2nk\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: 
gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-l6z9x\n" Jun 22 11:32:48.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-5n2nk' Jun 22 11:32:48.237: INFO: stderr: "" Jun 22 11:32:48.237: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-5n2nk\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.101.153.26\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.102:6379\nSession Affinity: None\nEvents: \n" Jun 22 11:32:48.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Jun 22 11:32:48.372: INFO: stderr: "" Jun 22 11:32:48.372: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 22 Jun 2020 11:32:42 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 22 Jun 2020 11:32:42 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 22 Jun 2020 11:32:42 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 22 Jun 2020 11:32:42 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 98d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 98d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 98d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 98d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 98d\n kube-system kube-scheduler-hunter-control-plane 100m 
(0%) 0 (0%) 0 (0%) 0 (0%) 98d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 98d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 22 11:32:48.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-5n2nk' Jun 22 11:32:48.478: INFO: stderr: "" Jun 22 11:32:48.478: INFO: stdout: "Name: e2e-tests-kubectl-5n2nk\nLabels: e2e-framework=kubectl\n e2e-run=afcb97dc-b475-11ea-8cd8-0242ac11001b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:32:48.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5n2nk" for this suite. Jun 22 11:33:10.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:33:10.544: INFO: namespace: e2e-tests-kubectl-5n2nk, resource: bindings, ignored listing per whitelist Jun 22 11:33:10.592: INFO: namespace e2e-tests-kubectl-5n2nk deletion completed in 22.11009275s • [SLOW TEST:27.452 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:33:10.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-scsbf [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-scsbf STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-scsbf Jun 22 11:33:10.747: INFO: Found 0 stateful pods, waiting for 1 Jun 22 11:33:20.752: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 22 
11:33:20.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-scsbf ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:33:21.041: INFO: stderr: "I0622 11:33:20.879823 1681 log.go:172] (0xc00013a840) (0xc00076c640) Create stream\nI0622 11:33:20.879890 1681 log.go:172] (0xc00013a840) (0xc00076c640) Stream added, broadcasting: 1\nI0622 11:33:20.882793 1681 log.go:172] (0xc00013a840) Reply frame received for 1\nI0622 11:33:20.882836 1681 log.go:172] (0xc00013a840) (0xc0005c4dc0) Create stream\nI0622 11:33:20.882849 1681 log.go:172] (0xc00013a840) (0xc0005c4dc0) Stream added, broadcasting: 3\nI0622 11:33:20.883745 1681 log.go:172] (0xc00013a840) Reply frame received for 3\nI0622 11:33:20.883796 1681 log.go:172] (0xc00013a840) (0xc0006fc000) Create stream\nI0622 11:33:20.883815 1681 log.go:172] (0xc00013a840) (0xc0006fc000) Stream added, broadcasting: 5\nI0622 11:33:20.884695 1681 log.go:172] (0xc00013a840) Reply frame received for 5\nI0622 11:33:21.033775 1681 log.go:172] (0xc00013a840) Data frame received for 3\nI0622 11:33:21.033826 1681 log.go:172] (0xc0005c4dc0) (3) Data frame handling\nI0622 11:33:21.033859 1681 log.go:172] (0xc0005c4dc0) (3) Data frame sent\nI0622 11:33:21.034021 1681 log.go:172] (0xc00013a840) Data frame received for 3\nI0622 11:33:21.034043 1681 log.go:172] (0xc0005c4dc0) (3) Data frame handling\nI0622 11:33:21.034201 1681 log.go:172] (0xc00013a840) Data frame received for 5\nI0622 11:33:21.034229 1681 log.go:172] (0xc0006fc000) (5) Data frame handling\nI0622 11:33:21.036297 1681 log.go:172] (0xc00013a840) Data frame received for 1\nI0622 11:33:21.036311 1681 log.go:172] (0xc00076c640) (1) Data frame handling\nI0622 11:33:21.036334 1681 log.go:172] (0xc00076c640) (1) Data frame sent\nI0622 11:33:21.036547 1681 log.go:172] (0xc00013a840) (0xc00076c640) Stream removed, broadcasting: 1\nI0622 11:33:21.036569 1681 log.go:172] (0xc00013a840) Go away received\nI0622 11:33:21.036816 1681 log.go:172] (0xc00013a840) (0xc00076c640) Stream removed, broadcasting: 1\nI0622 11:33:21.036844 1681 log.go:172] (0xc00013a840) (0xc0005c4dc0) Stream removed, broadcasting: 3\nI0622 11:33:21.036860 1681 log.go:172] (0xc00013a840) (0xc0006fc000) Stream removed, broadcasting: 5\n" Jun 22 11:33:21.041: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:33:21.041: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 11:33:21.046: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 22 11:33:31.052: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 11:33:31.052: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 11:33:31.073: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 11:33:31.073: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] Jun 22 11:33:31.073: INFO: Jun 22 11:33:31.073: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 22 11:33:32.077: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.988843502s Jun 22 11:33:33.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984706225s Jun 22 11:33:34.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.944051855s Jun 22 11:33:35.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.939466198s Jun 22 11:33:36.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.934201728s Jun 22 11:33:37.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.92926987s Jun 22 11:33:38.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.92475902s Jun 22 11:33:39.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.919221866s Jun 22 11:33:40.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 913.748044ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-scsbf Jun 22 11:33:41.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-scsbf ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 11:33:41.397: INFO: stderr: "I0622 11:33:41.308556 1704 log.go:172] (0xc000138630) (0xc000690640) Create stream\nI0622 11:33:41.308637 1704 log.go:172] (0xc000138630) (0xc000690640) Stream added, broadcasting: 1\nI0622 11:33:41.311708 1704 log.go:172] (0xc000138630) Reply frame received for 1\nI0622 11:33:41.311759 1704 log.go:172] (0xc000138630) (0xc00052ac80) Create stream\nI0622 11:33:41.311786 1704 log.go:172] (0xc000138630) (0xc00052ac80) Stream added, broadcasting: 3\nI0622 11:33:41.312693 1704 log.go:172] (0xc000138630) Reply frame received for 3\nI0622 11:33:41.312754 1704 log.go:172] (0xc000138630) (0xc0006a0000) Create stream\nI0622 11:33:41.312777 1704 log.go:172] (0xc000138630) (0xc0006a0000) Stream added, broadcasting: 5\nI0622 11:33:41.313900 1704 log.go:172] (0xc000138630) Reply frame received for 5\nI0622 11:33:41.391388 1704 log.go:172] (0xc000138630) Data frame received for 3\nI0622 11:33:41.391452 1704 log.go:172] (0xc00052ac80) (3) Data frame handling\nI0622 11:33:41.391470 1704 log.go:172] (0xc00052ac80) (3) Data frame sent\nI0622 11:33:41.391482 1704 log.go:172] (0xc000138630) Data frame received for 3\nI0622 11:33:41.391491 1704 log.go:172] (0xc00052ac80) (3) Data frame handling\nI0622 11:33:41.391530 1704 log.go:172] (0xc000138630) Data frame received for 5\nI0622 11:33:41.391555 1704 log.go:172] (0xc0006a0000) (5) Data frame handling\nI0622 11:33:41.392822 1704 log.go:172] (0xc000138630) Data frame received for 1\nI0622 11:33:41.392839 1704 log.go:172] (0xc000690640) (1) Data frame handling\nI0622 11:33:41.392852 1704 log.go:172] (0xc000690640) (1) Data frame sent\nI0622 11:33:41.392862 1704 log.go:172] (0xc000138630) (0xc000690640) Stream removed, broadcasting: 1\nI0622 11:33:41.392937 1704 log.go:172] (0xc000138630) Go away received\nI0622 11:33:41.393300 1704 log.go:172] (0xc000138630) (0xc000690640) Stream removed, broadcasting: 1\nI0622 11:33:41.393331 1704 log.go:172] (0xc000138630) (0xc00052ac80) Stream removed, broadcasting: 3\nI0622 11:33:41.393344 1704 log.go:172] (0xc000138630) (0xc0006a0000) Stream removed, broadcasting: 5\n" Jun 22 11:33:41.397: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 11:33:41.397: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 
11:33:41.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-scsbf ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 11:33:41.609: INFO: stderr: "I0622 11:33:41.515369 1727 log.go:172] (0xc000660420) (0xc00065d400) Create stream\nI0622 11:33:41.515424 1727 log.go:172] (0xc000660420) (0xc00065d400) Stream added, broadcasting: 1\nI0622 11:33:41.518024 1727 log.go:172] (0xc000660420) Reply frame received for 1\nI0622 11:33:41.518053 1727 log.go:172] (0xc000660420) (0xc000298000) Create stream\nI0622 11:33:41.518060 1727 log.go:172] (0xc000660420) (0xc000298000) Stream added, broadcasting: 3\nI0622 11:33:41.518799 1727 log.go:172] (0xc000660420) Reply frame received for 3\nI0622 11:33:41.518825 1727 log.go:172] (0xc000660420) (0xc00065d4a0) Create stream\nI0622 11:33:41.518833 1727 log.go:172] (0xc000660420) (0xc00065d4a0) Stream added, broadcasting: 5\nI0622 11:33:41.519632 1727 log.go:172] (0xc000660420) Reply frame received for 5\nI0622 11:33:41.602391 1727 log.go:172] (0xc000660420) Data frame received for 5\nI0622 11:33:41.602419 1727 log.go:172] (0xc00065d4a0) (5) Data frame handling\nI0622 11:33:41.602442 1727 log.go:172] (0xc000660420) Data frame received for 3\nmv: can't rename '/tmp/index.html': No such file or directory\nI0622 11:33:41.602617 1727 log.go:172] (0xc00065d4a0) (5) Data frame sent\nI0622 11:33:41.602647 1727 log.go:172] (0xc000298000) (3) Data frame handling\nI0622 11:33:41.602666 1727 log.go:172] (0xc000298000) (3) Data frame sent\nI0622 11:33:41.602680 1727 log.go:172] (0xc000660420) Data frame received for 3\nI0622 11:33:41.602687 1727 log.go:172] (0xc000298000) (3) Data frame handling\nI0622 11:33:41.602739 1727 log.go:172] (0xc000660420) Data frame received for 5\nI0622 11:33:41.602772 1727 log.go:172] (0xc00065d4a0) (5) Data frame handling\nI0622 11:33:41.604415 1727 log.go:172] (0xc000660420) Data frame received for 1\nI0622 11:33:41.604436 1727 log.go:172] (0xc00065d400) (1) Data frame handling\nI0622 11:33:41.604468 1727 log.go:172] (0xc00065d400) (1) Data frame sent\nI0622 11:33:41.604489 1727 log.go:172] (0xc000660420) (0xc00065d400) Stream removed, broadcasting: 1\nI0622 11:33:41.604522 1727 log.go:172] (0xc000660420) Go away received\nI0622 11:33:41.604673 1727 log.go:172] (0xc000660420) (0xc00065d400) Stream removed, broadcasting: 1\nI0622 11:33:41.604688 1727 log.go:172] (0xc000660420) (0xc000298000) Stream removed, broadcasting: 3\nI0622 11:33:41.604696 1727 log.go:172] (0xc000660420) (0xc00065d4a0) Stream removed, broadcasting: 5\n" Jun 22 11:33:41.609: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 11:33:41.609: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 11:33:41.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-scsbf ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 11:33:41.830: INFO: stderr: "I0622 11:33:41.745831 1750 log.go:172] (0xc0008322c0) (0xc0007a52c0) Create stream\nI0622 11:33:41.745891 1750 log.go:172] (0xc0008322c0) (0xc0007a52c0) Stream added, broadcasting: 1\nI0622 11:33:41.748682 1750 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0622 11:33:41.748748 1750 log.go:172] (0xc0008322c0) (0xc000550000) Create stream\nI0622 11:33:41.748779 1750 log.go:172] (0xc0008322c0) (0xc000550000) Stream added, broadcasting: 
3\nI0622 11:33:41.750212 1750 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0622 11:33:41.750288 1750 log.go:172] (0xc0008322c0) (0xc0007a5360) Create stream\nI0622 11:33:41.750319 1750 log.go:172] (0xc0008322c0) (0xc0007a5360) Stream added, broadcasting: 5\nI0622 11:33:41.751533 1750 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0622 11:33:41.821829 1750 log.go:172] (0xc0008322c0) Data frame received for 5\nI0622 11:33:41.821889 1750 log.go:172] (0xc0008322c0) Data frame received for 3\nI0622 11:33:41.821938 1750 log.go:172] (0xc000550000) (3) Data frame handling\nI0622 11:33:41.821968 1750 log.go:172] (0xc000550000) (3) Data frame sent\nI0622 11:33:41.821988 1750 log.go:172] (0xc0008322c0) Data frame received for 3\nI0622 11:33:41.822010 1750 log.go:172] (0xc000550000) (3) Data frame handling\nI0622 11:33:41.822035 1750 log.go:172] (0xc0007a5360) (5) Data frame handling\nI0622 11:33:41.822086 1750 log.go:172] (0xc0007a5360) (5) Data frame sent\nI0622 11:33:41.822106 1750 log.go:172] (0xc0008322c0) Data frame received for 5\nI0622 11:33:41.822122 1750 log.go:172] (0xc0007a5360) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0622 11:33:41.823604 1750 log.go:172] (0xc0008322c0) Data frame received for 1\nI0622 11:33:41.823634 1750 log.go:172] (0xc0007a52c0) (1) Data frame handling\nI0622 11:33:41.823665 1750 log.go:172] (0xc0007a52c0) (1) Data frame sent\nI0622 11:33:41.823698 1750 log.go:172] (0xc0008322c0) (0xc0007a52c0) Stream removed, broadcasting: 1\nI0622 11:33:41.823725 1750 log.go:172] (0xc0008322c0) Go away received\nI0622 11:33:41.823943 1750 log.go:172] (0xc0008322c0) (0xc0007a52c0) Stream removed, broadcasting: 1\nI0622 11:33:41.823968 1750 log.go:172] (0xc0008322c0) (0xc000550000) Stream removed, broadcasting: 3\nI0622 11:33:41.823985 1750 log.go:172] (0xc0008322c0) (0xc0007a5360) Stream removed, broadcasting: 5\n" Jun 22 11:33:41.830: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 11:33:41.830: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 11:33:41.834: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 22 11:33:51.839: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:33:51.839: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:33:51.839: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 22 11:33:51.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-scsbf ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:33:52.062: INFO: stderr: "I0622 11:33:51.976614 1772 log.go:172] (0xc0007c4160) (0xc0005ea5a0) Create stream\nI0622 11:33:51.976671 1772 log.go:172] (0xc0007c4160) (0xc0005ea5a0) Stream added, broadcasting: 1\nI0622 11:33:51.979219 1772 log.go:172] (0xc0007c4160) Reply frame received for 1\nI0622 11:33:51.979265 1772 log.go:172] (0xc0007c4160) (0xc000726d20) Create stream\nI0622 11:33:51.979280 1772 log.go:172] (0xc0007c4160) (0xc000726d20) Stream added, broadcasting: 3\nI0622 11:33:51.980482 1772 log.go:172] (0xc0007c4160) Reply frame received for 3\nI0622 11:33:51.980565 1772 log.go:172] (0xc0007c4160) (0xc0005ca000) Create stream\nI0622 11:33:51.980592 
1772 log.go:172] (0xc0007c4160) (0xc0005ca000) Stream added, broadcasting: 5\nI0622 11:33:51.981763 1772 log.go:172] (0xc0007c4160) Reply frame received for 5\nI0622 11:33:52.055919 1772 log.go:172] (0xc0007c4160) Data frame received for 5\nI0622 11:33:52.055946 1772 log.go:172] (0xc0005ca000) (5) Data frame handling\nI0622 11:33:52.055963 1772 log.go:172] (0xc0007c4160) Data frame received for 3\nI0622 11:33:52.055973 1772 log.go:172] (0xc000726d20) (3) Data frame handling\nI0622 11:33:52.055989 1772 log.go:172] (0xc000726d20) (3) Data frame sent\nI0622 11:33:52.056000 1772 log.go:172] (0xc0007c4160) Data frame received for 3\nI0622 11:33:52.056018 1772 log.go:172] (0xc000726d20) (3) Data frame handling\nI0622 11:33:52.057786 1772 log.go:172] (0xc0007c4160) Data frame received for 1\nI0622 11:33:52.057817 1772 log.go:172] (0xc0005ea5a0) (1) Data frame handling\nI0622 11:33:52.057862 1772 log.go:172] (0xc0005ea5a0) (1) Data frame sent\nI0622 11:33:52.057895 1772 log.go:172] (0xc0007c4160) (0xc0005ea5a0) Stream removed, broadcasting: 1\nI0622 11:33:52.057911 1772 log.go:172] (0xc0007c4160) Go away received\nI0622 11:33:52.058088 1772 log.go:172] (0xc0007c4160) (0xc0005ea5a0) Stream removed, broadcasting: 1\nI0622 11:33:52.058151 1772 log.go:172] (0xc0007c4160) (0xc000726d20) Stream removed, broadcasting: 3\nI0622 11:33:52.058170 1772 log.go:172] (0xc0007c4160) (0xc0005ca000) Stream removed, broadcasting: 5\n" Jun 22 11:33:52.063: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:33:52.063: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 11:33:52.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-scsbf ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:33:52.327: INFO: stderr: "I0622 11:33:52.199840 1794 log.go:172] (0xc000162840) (0xc000768640) Create stream\nI0622 11:33:52.199921 1794 log.go:172] (0xc000162840) (0xc000768640) Stream added, broadcasting: 1\nI0622 11:33:52.202816 1794 log.go:172] (0xc000162840) Reply frame received for 1\nI0622 11:33:52.202854 1794 log.go:172] (0xc000162840) (0xc0005e0c80) Create stream\nI0622 11:33:52.202863 1794 log.go:172] (0xc000162840) (0xc0005e0c80) Stream added, broadcasting: 3\nI0622 11:33:52.203797 1794 log.go:172] (0xc000162840) Reply frame received for 3\nI0622 11:33:52.203835 1794 log.go:172] (0xc000162840) (0xc0005e0dc0) Create stream\nI0622 11:33:52.203845 1794 log.go:172] (0xc000162840) (0xc0005e0dc0) Stream added, broadcasting: 5\nI0622 11:33:52.204699 1794 log.go:172] (0xc000162840) Reply frame received for 5\nI0622 11:33:52.320555 1794 log.go:172] (0xc000162840) Data frame received for 3\nI0622 11:33:52.320581 1794 log.go:172] (0xc0005e0c80) (3) Data frame handling\nI0622 11:33:52.320595 1794 log.go:172] (0xc0005e0c80) (3) Data frame sent\nI0622 11:33:52.320601 1794 log.go:172] (0xc000162840) Data frame received for 3\nI0622 11:33:52.320605 1794 log.go:172] (0xc0005e0c80) (3) Data frame handling\nI0622 11:33:52.320643 1794 log.go:172] (0xc000162840) Data frame received for 5\nI0622 11:33:52.320666 1794 log.go:172] (0xc0005e0dc0) (5) Data frame handling\nI0622 11:33:52.322626 1794 log.go:172] (0xc000162840) Data frame received for 1\nI0622 11:33:52.322651 1794 log.go:172] (0xc000768640) (1) Data frame handling\nI0622 11:33:52.322664 1794 log.go:172] (0xc000768640) (1) Data frame sent\nI0622 11:33:52.322671 1794 log.go:172] 
(0xc000162840) (0xc000768640) Stream removed, broadcasting: 1\nI0622 11:33:52.322679 1794 log.go:172] (0xc000162840) Go away received\nI0622 11:33:52.322918 1794 log.go:172] (0xc000162840) (0xc000768640) Stream removed, broadcasting: 1\nI0622 11:33:52.322929 1794 log.go:172] (0xc000162840) (0xc0005e0c80) Stream removed, broadcasting: 3\nI0622 11:33:52.322934 1794 log.go:172] (0xc000162840) (0xc0005e0dc0) Stream removed, broadcasting: 5\n" Jun 22 11:33:52.327: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:33:52.327: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 11:33:52.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-scsbf ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:33:52.601: INFO: stderr: "I0622 11:33:52.476195 1817 log.go:172] (0xc0008782c0) (0xc000782640) Create stream\nI0622 11:33:52.476263 1817 log.go:172] (0xc0008782c0) (0xc000782640) Stream added, broadcasting: 1\nI0622 11:33:52.479952 1817 log.go:172] (0xc0008782c0) Reply frame received for 1\nI0622 11:33:52.480009 1817 log.go:172] (0xc0008782c0) (0xc0007826e0) Create stream\nI0622 11:33:52.480024 1817 log.go:172] (0xc0008782c0) (0xc0007826e0) Stream added, broadcasting: 3\nI0622 11:33:52.481004 1817 log.go:172] (0xc0008782c0) Reply frame received for 3\nI0622 11:33:52.481052 1817 log.go:172] (0xc0008782c0) (0xc000134dc0) Create stream\nI0622 11:33:52.481069 1817 log.go:172] (0xc0008782c0) (0xc000134dc0) Stream added, broadcasting: 5\nI0622 11:33:52.482396 1817 log.go:172] (0xc0008782c0) Reply frame received for 5\nI0622 11:33:52.594383 1817 log.go:172] (0xc0008782c0) Data frame received for 3\nI0622 11:33:52.594417 1817 log.go:172] (0xc0007826e0) (3) Data frame handling\nI0622 11:33:52.594439 1817 log.go:172] (0xc0007826e0) (3) Data frame sent\nI0622 11:33:52.594449 1817 log.go:172] (0xc0008782c0) Data frame received for 3\nI0622 11:33:52.594455 1817 log.go:172] (0xc0007826e0) (3) Data frame handling\nI0622 11:33:52.594757 1817 log.go:172] (0xc0008782c0) Data frame received for 5\nI0622 11:33:52.594799 1817 log.go:172] (0xc000134dc0) (5) Data frame handling\nI0622 11:33:52.596491 1817 log.go:172] (0xc0008782c0) Data frame received for 1\nI0622 11:33:52.596531 1817 log.go:172] (0xc000782640) (1) Data frame handling\nI0622 11:33:52.596569 1817 log.go:172] (0xc000782640) (1) Data frame sent\nI0622 11:33:52.596616 1817 log.go:172] (0xc0008782c0) (0xc000782640) Stream removed, broadcasting: 1\nI0622 11:33:52.596652 1817 log.go:172] (0xc0008782c0) Go away received\nI0622 11:33:52.596918 1817 log.go:172] (0xc0008782c0) (0xc000782640) Stream removed, broadcasting: 1\nI0622 11:33:52.596943 1817 log.go:172] (0xc0008782c0) (0xc0007826e0) Stream removed, broadcasting: 3\nI0622 11:33:52.596955 1817 log.go:172] (0xc0008782c0) (0xc000134dc0) Stream removed, broadcasting: 5\n" Jun 22 11:33:52.601: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:33:52.601: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 11:33:52.601: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 11:33:52.639: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 22 11:34:02.647: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 
11:34:02.647: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 22 11:34:02.647: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 22 11:34:02.661: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 11:34:02.661: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] Jun 22 11:34:02.661: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:02.661: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:02.661: INFO: Jun 22 11:34:02.662: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 11:34:03.832: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 11:34:03.832: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] Jun 22 11:34:03.832: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:03.832: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:03.832: INFO: Jun 22 11:34:03.832: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 11:34:04.838: INFO: POD NODE PHASE GRACE 
CONDITIONS Jun 22 11:34:04.838: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] Jun 22 11:34:04.838: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:04.838: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:04.838: INFO: Jun 22 11:34:04.838: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 11:34:05.843: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 11:34:05.843: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] Jun 22 11:34:05.843: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:05.843: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:05.843: INFO: Jun 22 11:34:05.843: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 11:34:06.848: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 11:34:06.848: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] Jun 22 11:34:06.849: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:06.849: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:06.849: INFO: Jun 22 11:34:06.849: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 11:34:07.887: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 11:34:07.887: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] Jun 22 11:34:07.887: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:07.887: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:07.887: INFO: Jun 22 11:34:07.887: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 11:34:08.893: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 11:34:08.893: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] 
Jun 22 11:34:08.893: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:08.893: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:08.893: INFO: Jun 22 11:34:08.893: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 11:34:09.903: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 11:34:09.903: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] Jun 22 11:34:09.903: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:09.904: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:09.904: INFO: Jun 22 11:34:09.904: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 11:34:10.909: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 11:34:10.909: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:10 +0000 UTC }] Jun 22 11:34:10.909: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:10.909: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:33:31 +0000 UTC }] Jun 22 11:34:10.909: INFO: Jun 22 11:34:10.909: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 11:34:11.913: INFO: Verifying statefulset ss doesn't scale past 0 for another 744.591392ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-scsbf Jun 22 11:34:12.917: INFO: Scaling statefulset ss to 0 Jun 22 11:34:12.927: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 22 11:34:12.930: INFO: Deleting all statefulset in ns e2e-tests-statefulset-scsbf Jun 22 11:34:12.932: INFO: Scaling statefulset ss to 0 Jun 22 11:34:12.940: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 11:34:12.943: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:34:12.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-scsbf" for this suite. 
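The burst-scaling check above toggles pod readiness by moving nginx's index.html out of and back into place over kubectl exec. A minimal stand-alone sketch of the same sequence (pod, set, and namespace names taken from this run; the scale steps are shown with kubectl scale, although the suite drives scaling through the client API rather than the CLI):

# Break the nginx readiness check on ss-0 by hiding the page it serves
kubectl exec --namespace=e2e-tests-statefulset-scsbf ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

# Burst scale up anyway; the controller must not wait for ss-0 to become Ready
kubectl scale statefulset ss --replicas=3 --namespace=e2e-tests-statefulset-scsbf

# Restore the page so the readiness probe passes again
kubectl exec --namespace=e2e-tests-statefulset-scsbf ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

# Scale back down to zero and let the pods terminate even while unready
kubectl scale statefulset ss --replicas=0 --namespace=e2e-tests-statefulset-scsbf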
Jun 22 11:34:18.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:34:18.999: INFO: namespace: e2e-tests-statefulset-scsbf, resource: bindings, ignored listing per whitelist Jun 22 11:34:19.050: INFO: namespace e2e-tests-statefulset-scsbf deletion completed in 6.091701758s • [SLOW TEST:68.457 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:34:19.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-4f8ffec0-b47c-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:34:19.225: INFO: Waiting up to 5m0s for pod "pod-configmaps-4f92c3ef-b47c-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-configmap-65xz6" to be "success or failure" Jun 22 11:34:19.254: INFO: Pod "pod-configmaps-4f92c3ef-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.625648ms Jun 22 11:34:21.258: INFO: Pod "pod-configmaps-4f92c3ef-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033197523s Jun 22 11:34:23.262: INFO: Pod "pod-configmaps-4f92c3ef-b47c-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.03658266s Jun 22 11:34:25.265: INFO: Pod "pod-configmaps-4f92c3ef-b47c-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040105266s STEP: Saw pod success Jun 22 11:34:25.265: INFO: Pod "pod-configmaps-4f92c3ef-b47c-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:34:25.267: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-4f92c3ef-b47c-11ea-8cd8-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 22 11:34:25.298: INFO: Waiting for pod pod-configmaps-4f92c3ef-b47c-11ea-8cd8-0242ac11001b to disappear Jun 22 11:34:25.364: INFO: Pod pod-configmaps-4f92c3ef-b47c-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:34:25.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-65xz6" for this suite. 
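The ConfigMap-as-volume test above creates a ConfigMap, mounts it into a pod that runs as a non-root user, and expects the pod to print the key's value and exit successfully. A rough hand-run equivalent, assuming busybox for the container image and illustrative resource names (the suite uses its own test image and generated names):

kubectl create configmap configmap-test-volume --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # consume the volume as non-root, as the test requires
  containers:
  - name: configmap-volume-test
    image: busybox               # assumed image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF

kubectl logs pod-configmaps-example   # expected output: value-1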
Jun 22 11:34:31.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:34:31.454: INFO: namespace: e2e-tests-configmap-65xz6, resource: bindings, ignored listing per whitelist Jun 22 11:34:31.486: INFO: namespace e2e-tests-configmap-65xz6 deletion completed in 6.117838589s • [SLOW TEST:12.436 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:34:31.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-2xzkq Jun 22 11:34:35.634: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-2xzkq STEP: checking the pod's current state and verifying that restartCount is present Jun 22 11:34:35.638: INFO: Initial restart count of pod liveness-http is 0 Jun 22 11:34:55.821: INFO: Restart count of pod e2e-tests-container-probe-2xzkq/liveness-http is now 1 (20.183907731s elapsed) Jun 22 11:35:15.950: INFO: Restart count of pod e2e-tests-container-probe-2xzkq/liveness-http is now 2 (40.311979519s elapsed) Jun 22 11:35:36.027: INFO: Restart count of pod e2e-tests-container-probe-2xzkq/liveness-http is now 3 (1m0.38969798s elapsed) Jun 22 11:35:56.068: INFO: Restart count of pod e2e-tests-container-probe-2xzkq/liveness-http is now 4 (1m20.430853206s elapsed) Jun 22 11:36:58.196: INFO: Restart count of pod e2e-tests-container-probe-2xzkq/liveness-http is now 5 (2m22.558115523s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:36:58.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-2xzkq" for this suite. 
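The probe test above depends on a pod whose HTTP liveness check starts failing, so the kubelet keeps restarting the container and restartCount only ever climbs. A sketch of that shape of pod spec, assuming the stock liveness sample image and probe settings rather than the suite's exact ones:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # assumed; serves /healthz and begins returning errors after a while
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
EOF

# Each failed probe kills the container; the restart count should increase monotonically
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'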
Jun 22 11:37:04.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:37:04.381: INFO: namespace: e2e-tests-container-probe-2xzkq, resource: bindings, ignored listing per whitelist Jun 22 11:37:04.441: INFO: namespace e2e-tests-container-probe-2xzkq deletion completed in 6.19489927s • [SLOW TEST:152.956 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:37:04.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-2jjzg/configmap-test-b24188b3-b47c-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:37:04.823: INFO: Waiting up to 5m0s for pod "pod-configmaps-b245f079-b47c-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-configmap-2jjzg" to be "success or failure" Jun 22 11:37:04.923: INFO: Pod "pod-configmaps-b245f079-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 99.55651ms Jun 22 11:37:06.927: INFO: Pod "pod-configmaps-b245f079-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10318373s Jun 22 11:37:08.983: INFO: Pod "pod-configmaps-b245f079-b47c-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159586207s STEP: Saw pod success Jun 22 11:37:08.983: INFO: Pod "pod-configmaps-b245f079-b47c-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:37:08.986: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-b245f079-b47c-11ea-8cd8-0242ac11001b container env-test: STEP: delete the pod Jun 22 11:37:09.170: INFO: Waiting for pod pod-configmaps-b245f079-b47c-11ea-8cd8-0242ac11001b to disappear Jun 22 11:37:09.191: INFO: Pod pod-configmaps-b245f079-b47c-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:37:09.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2jjzg" for this suite. 
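The environment-variable variant works the same way, except the ConfigMap key is injected through env/valueFrom instead of a volume mount. A hand-run approximation, again with assumed names and a busybox image:

kubectl create configmap configmap-test --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox               # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF

kubectl logs pod-configmaps-env | grep CONFIG_DATA_1   # expect CONFIG_DATA_1=value-1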
Jun 22 11:37:15.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:37:15.260: INFO: namespace: e2e-tests-configmap-2jjzg, resource: bindings, ignored listing per whitelist Jun 22 11:37:15.286: INFO: namespace e2e-tests-configmap-2jjzg deletion completed in 6.091952635s • [SLOW TEST:10.845 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:37:15.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-kcn72 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-kcn72 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-kcn72 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-kcn72 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-kcn72 Jun 22 11:37:19.544: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-kcn72, name: ss-0, uid: b8cac86f-b47c-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Jun 22 11:37:21.246: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-kcn72, name: ss-0, uid: b8cac86f-b47c-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Jun 22 11:37:21.257: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-kcn72, name: ss-0, uid: b8cac86f-b47c-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
Jun 22 11:37:21.339: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-kcn72 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-kcn72 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-kcn72 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 22 11:37:27.693: INFO: Deleting all statefulset in ns e2e-tests-statefulset-kcn72 Jun 22 11:37:27.695: INFO: Scaling statefulset ss to 0 Jun 22 11:37:47.759: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 11:37:47.762: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:37:47.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-kcn72" for this suite. Jun 22 11:37:53.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:37:53.898: INFO: namespace: e2e-tests-statefulset-kcn72, resource: bindings, ignored listing per whitelist Jun 22 11:37:53.900: INFO: namespace e2e-tests-statefulset-kcn72 deletion completed in 6.118467717s • [SLOW TEST:38.614 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:37:53.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 11:37:53.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-92dz7' Jun 22 11:37:57.255: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 11:37:57.255: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jun 22 11:37:57.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-92dz7' Jun 22 11:37:57.386: INFO: stderr: "" Jun 22 11:37:57.386: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:37:57.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-92dz7" for this suite. Jun 22 11:38:19.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:38:19.547: INFO: namespace: e2e-tests-kubectl-92dz7, resource: bindings, ignored listing per whitelist Jun 22 11:38:19.547: INFO: namespace e2e-tests-kubectl-92dz7 deletion completed in 22.139378096s • [SLOW TEST:25.647 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:38:19.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jun 22 11:38:19.681: INFO: Waiting up to 5m0s for pod "client-containers-dee3d1a2-b47c-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-containers-znjz5" to be "success or failure" Jun 22 11:38:19.686: INFO: Pod "client-containers-dee3d1a2-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161786ms Jun 22 11:38:21.696: INFO: Pod "client-containers-dee3d1a2-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014226129s Jun 22 11:38:23.727: INFO: Pod "client-containers-dee3d1a2-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045167579s Jun 22 11:38:25.730: INFO: Pod "client-containers-dee3d1a2-b47c-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.048994339s STEP: Saw pod success Jun 22 11:38:25.731: INFO: Pod "client-containers-dee3d1a2-b47c-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:38:25.733: INFO: Trying to get logs from node hunter-worker2 pod client-containers-dee3d1a2-b47c-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:38:25.919: INFO: Waiting for pod client-containers-dee3d1a2-b47c-11ea-8cd8-0242ac11001b to disappear Jun 22 11:38:25.923: INFO: Pod client-containers-dee3d1a2-b47c-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:38:25.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-znjz5" for this suite. Jun 22 11:38:31.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:38:31.982: INFO: namespace: e2e-tests-containers-znjz5, resource: bindings, ignored listing per whitelist Jun 22 11:38:32.064: INFO: namespace e2e-tests-containers-znjz5 deletion completed in 6.13690788s • [SLOW TEST:12.516 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:38:32.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e65a0866-b47c-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:38:32.199: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e65a6592-b47c-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-b84k6" to be "success or failure" Jun 22 11:38:32.203: INFO: Pod "pod-projected-configmaps-e65a6592-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.620825ms Jun 22 11:38:34.236: INFO: Pod "pod-projected-configmaps-e65a6592-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03630724s Jun 22 11:38:36.240: INFO: Pod "pod-projected-configmaps-e65a6592-b47c-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040958391s STEP: Saw pod success Jun 22 11:38:36.240: INFO: Pod "pod-projected-configmaps-e65a6592-b47c-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:38:36.243: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e65a6592-b47c-11ea-8cd8-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 22 11:38:36.315: INFO: Waiting for pod pod-projected-configmaps-e65a6592-b47c-11ea-8cd8-0242ac11001b to disappear Jun 22 11:38:36.326: INFO: Pod pod-projected-configmaps-e65a6592-b47c-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:38:36.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b84k6" for this suite. Jun 22 11:38:42.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:38:42.369: INFO: namespace: e2e-tests-projected-b84k6, resource: bindings, ignored listing per whitelist Jun 22 11:38:42.430: INFO: namespace e2e-tests-projected-b84k6 deletion completed in 6.101736989s • [SLOW TEST:10.366 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:38:42.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 22 11:38:42.640: INFO: Waiting up to 5m0s for pod "downward-api-ec93bc87-b47c-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-c4mtq" to be "success or failure" Jun 22 11:38:42.734: INFO: Pod "downward-api-ec93bc87-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 93.77989ms Jun 22 11:38:44.738: INFO: Pod "downward-api-ec93bc87-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098586094s Jun 22 11:38:46.743: INFO: Pod "downward-api-ec93bc87-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103121832s Jun 22 11:38:48.747: INFO: Pod "downward-api-ec93bc87-b47c-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.107422201s STEP: Saw pod success Jun 22 11:38:48.747: INFO: Pod "downward-api-ec93bc87-b47c-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:38:48.751: INFO: Trying to get logs from node hunter-worker2 pod downward-api-ec93bc87-b47c-11ea-8cd8-0242ac11001b container dapi-container: STEP: delete the pod Jun 22 11:38:48.795: INFO: Waiting for pod downward-api-ec93bc87-b47c-11ea-8cd8-0242ac11001b to disappear Jun 22 11:38:48.799: INFO: Pod downward-api-ec93bc87-b47c-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:38:48.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c4mtq" for this suite. Jun 22 11:38:54.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:38:54.903: INFO: namespace: e2e-tests-downward-api-c4mtq, resource: bindings, ignored listing per whitelist Jun 22 11:38:54.943: INFO: namespace e2e-tests-downward-api-c4mtq deletion completed in 6.119917372s • [SLOW TEST:12.513 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:38:54.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-f3fe80a5-b47c-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:38:55.128: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f4042967-b47c-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-ts6gh" to be "success or failure" Jun 22 11:38:55.144: INFO: Pod "pod-projected-configmaps-f4042967-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.204791ms Jun 22 11:38:57.149: INFO: Pod "pod-projected-configmaps-f4042967-b47c-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021212146s Jun 22 11:38:59.153: INFO: Pod "pod-projected-configmaps-f4042967-b47c-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025287568s STEP: Saw pod success Jun 22 11:38:59.153: INFO: Pod "pod-projected-configmaps-f4042967-b47c-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:38:59.156: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-f4042967-b47c-11ea-8cd8-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 22 11:38:59.193: INFO: Waiting for pod pod-projected-configmaps-f4042967-b47c-11ea-8cd8-0242ac11001b to disappear Jun 22 11:38:59.210: INFO: Pod pod-projected-configmaps-f4042967-b47c-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:38:59.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ts6gh" for this suite. Jun 22 11:39:05.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:39:05.425: INFO: namespace: e2e-tests-projected-ts6gh, resource: bindings, ignored listing per whitelist Jun 22 11:39:05.439: INFO: namespace e2e-tests-projected-ts6gh deletion completed in 6.224877735s • [SLOW TEST:10.495 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:39:05.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 22 11:39:10.089: INFO: Successfully updated pod "labelsupdatefa3aa730-b47c-11ea-8cd8-0242ac11001b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:39:14.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hjv5v" for this suite. 
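The projected downwardAPI test above mounts the pod's own labels as a file and then edits the labels, expecting the kubelet to refresh the mounted file in place. A minimal sketch of that setup (pod name, file path, and label values are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo
    labels:
      stage: before
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: labels
              fieldRef:
                fieldPath: metadata.labels
  EOF

  kubectl label pod labels-demo stage=after --overwrite   # the mounted file is rewritten shortly after
  kubectl logs labels-demo --tail=2                       # should eventually show stage="after"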
Jun 22 11:39:36.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:39:36.260: INFO: namespace: e2e-tests-projected-hjv5v, resource: bindings, ignored listing per whitelist Jun 22 11:39:36.294: INFO: namespace e2e-tests-projected-hjv5v deletion completed in 22.112425025s • [SLOW TEST:30.854 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:39:36.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0c9f9004-b47d-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 11:39:36.404: INFO: Waiting up to 5m0s for pod "pod-secrets-0ca03468-b47d-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-secrets-gxpck" to be "success or failure" Jun 22 11:39:36.424: INFO: Pod "pod-secrets-0ca03468-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.548952ms Jun 22 11:39:38.427: INFO: Pod "pod-secrets-0ca03468-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02378903s Jun 22 11:39:40.431: INFO: Pod "pod-secrets-0ca03468-b47d-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027769165s STEP: Saw pod success Jun 22 11:39:40.432: INFO: Pod "pod-secrets-0ca03468-b47d-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:39:40.434: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-0ca03468-b47d-11ea-8cd8-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 22 11:39:40.510: INFO: Waiting for pod pod-secrets-0ca03468-b47d-11ea-8cd8-0242ac11001b to disappear Jun 22 11:39:40.531: INFO: Pod pod-secrets-0ca03468-b47d-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:39:40.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gxpck" for this suite. 
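The multi-volume Secret test above simply mounts the same Secret through two separate volumes at two paths and checks that the content is readable at both. Roughly, with illustrative names and key/value:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-multi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
      volumeMounts:
      - name: secret-volume-1
        mountPath: /etc/secret-volume-1
        readOnly: true
      - name: secret-volume-2
        mountPath: /etc/secret-volume-2
        readOnly: true
    volumes:
    - name: secret-volume-1
      secret:
        secretName: demo-secret
    - name: secret-volume-2
      secret:
        secretName: demo-secret
  EOF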
Jun 22 11:39:46.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:39:46.613: INFO: namespace: e2e-tests-secrets-gxpck, resource: bindings, ignored listing per whitelist Jun 22 11:39:46.630: INFO: namespace e2e-tests-secrets-gxpck deletion completed in 6.09523375s • [SLOW TEST:10.336 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:39:46.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 11:39:46.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-sqgx4' Jun 22 11:39:46.877: INFO: stderr: "" Jun 22 11:39:46.877: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jun 22 11:39:46.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-sqgx4' Jun 22 11:39:51.067: INFO: stderr: "" Jun 22 11:39:51.067: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:39:51.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sqgx4" for this suite. 
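Taken together with the earlier "Kubectl run job" spec, this test shows how the restart policy selects the generated object on this kubectl version: --restart=Never with the run-pod/v1 generator yields a bare Pod, while --restart=OnFailure with the (deprecated) job/v1 generator yields a Job. A sketch of the equivalent commands outside the test harness, with illustrative names:

  # bare pod, what the test above verifies
  kubectl run demo-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
  kubectl get pod demo-pod

  # job, matching the earlier test; newer kubectl releases use `kubectl create job` instead
  kubectl run demo-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine
  kubectl get job demo-job

  kubectl delete pod demo-pod
  kubectl delete job demo-job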
Jun 22 11:39:57.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:39:57.156: INFO: namespace: e2e-tests-kubectl-sqgx4, resource: bindings, ignored listing per whitelist Jun 22 11:39:57.201: INFO: namespace e2e-tests-kubectl-sqgx4 deletion completed in 6.12770681s • [SLOW TEST:10.571 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:39:57.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 11:39:57.354: INFO: Waiting up to 5m0s for pod "downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-bkqrz" to be "success or failure" Jun 22 11:39:57.360: INFO: Pod "downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026037ms Jun 22 11:39:59.364: INFO: Pod "downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010246584s Jun 22 11:40:01.368: INFO: Pod "downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013808754s Jun 22 11:40:03.392: INFO: Pod "downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037997435s Jun 22 11:40:05.397: INFO: Pod "downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043009943s Jun 22 11:40:07.415: INFO: Pod "downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 10.060695174s Jun 22 11:40:09.418: INFO: Pod "downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.063780888s STEP: Saw pod success Jun 22 11:40:09.418: INFO: Pod "downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:40:09.419: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 11:40:09.550: INFO: Waiting for pod downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b to disappear Jun 22 11:40:09.567: INFO: Pod downwardapi-volume-191b7e84-b47d-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:40:09.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bkqrz" for this suite. Jun 22 11:40:15.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:40:15.958: INFO: namespace: e2e-tests-downward-api-bkqrz, resource: bindings, ignored listing per whitelist Jun 22 11:40:15.987: INFO: namespace e2e-tests-downward-api-bkqrz deletion completed in 6.415742882s • [SLOW TEST:18.785 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:40:15.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:40:31.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-cb46m" for this suite. 
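The read-only kubelet test above emits no intermediate steps in the log; what it exercises is a container started with a read-only root filesystem, where any write outside a mounted volume fails. A minimal sketch of that kind of pod (image and command are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readonly-root-demo
  spec:
    restartPolicy: Never
    containers:
    - name: busybox-readonly
      image: busybox
      command: ["sh", "-c", "echo hello > /file; sleep 30"]   # the write is expected to fail
      securityContext:
        readOnlyRootFilesystem: true
  EOF

  kubectl logs readonly-root-demo   # expect something like: sh: can't create /file: Read-only file system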
Jun 22 11:41:13.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:41:13.315: INFO: namespace: e2e-tests-kubelet-test-cb46m, resource: bindings, ignored listing per whitelist Jun 22 11:41:13.383: INFO: namespace e2e-tests-kubelet-test-cb46m deletion completed in 42.143750838s • [SLOW TEST:57.396 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:41:13.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 22 11:41:20.011: INFO: Successfully updated pod "annotationupdate467c9c36-b47d-11ea-8cd8-0242ac11001b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:41:22.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h6whx" for this suite. 
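The annotation variant above works the same way as the labels test earlier in this run: metadata.annotations is projected into a file, the test patches the annotations, and the kubelet rewrites the file. The trigger step, in kubectl terms (pod name, path, and annotation are illustrative, and assume a pod set up like the labels sketch but projecting metadata.annotations):

  kubectl annotate pod annotations-demo builder=bar --overwrite
  kubectl exec annotations-demo -- cat /etc/podinfo/annotations   # shows builder="bar" once refreshed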
Jun 22 11:41:38.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:41:38.240: INFO: namespace: e2e-tests-projected-h6whx, resource: bindings, ignored listing per whitelist Jun 22 11:41:38.275: INFO: namespace e2e-tests-projected-h6whx deletion completed in 16.232227546s • [SLOW TEST:24.892 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:41:38.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:41:38.439: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 22 11:41:43.467: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 22 11:41:43.467: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 22 11:41:45.472: INFO: Creating deployment "test-rollover-deployment" Jun 22 11:41:45.531: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 22 11:41:47.552: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 22 11:41:47.557: INFO: Ensure that both replica sets have 1 created replica Jun 22 11:41:47.562: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 22 11:41:47.568: INFO: Updating deployment test-rollover-deployment Jun 22 11:41:47.568: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 22 11:41:49.772: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 22 11:41:49.778: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 22 11:41:49.784: INFO: all replica sets need to contain the pod-template-hash label Jun 22 11:41:49.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422907, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63728422905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 11:41:51.870: INFO: all replica sets need to contain the pod-template-hash label Jun 22 11:41:51.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422907, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 11:41:53.792: INFO: all replica sets need to contain the pod-template-hash label Jun 22 11:41:53.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422912, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 11:41:55.794: INFO: all replica sets need to contain the pod-template-hash label Jun 22 11:41:55.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422912, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 11:41:57.793: INFO: all replica sets need to contain the pod-template-hash label Jun 22 11:41:57.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422912, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 11:41:59.792: INFO: all replica sets need to contain the pod-template-hash label Jun 22 11:41:59.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422912, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 11:42:01.791: INFO: all replica sets need to contain the pod-template-hash label Jun 22 11:42:01.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422906, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422912, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728422905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 11:42:03.791: INFO: Jun 22 11:42:03.791: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 22 11:42:03.800: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-n4c4d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-n4c4d/deployments/test-rollover-deployment,UID:598fed49-b47d-11ea-99e8-0242ac110002,ResourceVersion:17289348,Generation:2,CreationTimestamp:2020-06-22 11:41:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-22 11:41:46 +0000 UTC 2020-06-22 11:41:46 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-22 11:42:02 +0000 UTC 2020-06-22 11:41:45 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 22 11:42:03.803: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-n4c4d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-n4c4d/replicasets/test-rollover-deployment-5b8479fdb6,UID:5acfd6c5-b47d-11ea-99e8-0242ac110002,ResourceVersion:17289339,Generation:2,CreationTimestamp:2020-06-22 11:41:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 598fed49-b47d-11ea-99e8-0242ac110002 0xc0012980d7 0xc0012980d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 22 11:42:03.803: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 22 11:42:03.803: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-n4c4d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-n4c4d/replicasets/test-rollover-controller,UID:55593f51-b47d-11ea-99e8-0242ac110002,ResourceVersion:17289347,Generation:2,CreationTimestamp:2020-06-22 11:41:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 598fed49-b47d-11ea-99e8-0242ac110002 0xc00122bbe7 0xc00122bbe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 11:42:03.804: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-n4c4d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-n4c4d/replicasets/test-rollover-deployment-58494b7559,UID:59ad595f-b47d-11ea-99e8-0242ac110002,ResourceVersion:17289297,Generation:2,CreationTimestamp:2020-06-22 11:41:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 598fed49-b47d-11ea-99e8-0242ac110002 0xc001298007 0xc001298008}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 11:42:03.807: INFO: Pod "test-rollover-deployment-5b8479fdb6-tmh4c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-tmh4c,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-n4c4d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-n4c4d/pods/test-rollover-deployment-5b8479fdb6-tmh4c,UID:5ae9c183-b47d-11ea-99e8-0242ac110002,ResourceVersion:17289317,Generation:0,CreationTimestamp:2020-06-22 11:41:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 5acfd6c5-b47d-11ea-99e8-0242ac110002 0xc000f8dfa7 0xc000f8dfa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4wmct {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4wmct,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4wmct true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b3c050} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b3c070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:41:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:41:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:41:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-06-22 11:41:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.114,StartTime:2020-06-22 11:41:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-22 11:41:51 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8889c6708cfbcc8bd0d7cc890db949c76c0feaa3c3a930b6ddce802ab6c44404}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:42:03.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-n4c4d" for this suite. Jun 22 11:42:11.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:42:11.870: INFO: namespace: e2e-tests-deployment-n4c4d, resource: bindings, ignored listing per whitelist Jun 22 11:42:11.909: INFO: namespace e2e-tests-deployment-n4c4d deletion completed in 8.097943468s • [SLOW TEST:33.634 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:42:11.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jun 22 11:42:12.012: INFO: Waiting up to 5m0s for pod "pod-695ee315-b47d-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-knnkn" to be "success or failure" Jun 22 11:42:12.048: INFO: Pod "pod-695ee315-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.910015ms Jun 22 11:42:14.054: INFO: Pod "pod-695ee315-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042196155s Jun 22 11:42:16.058: INFO: Pod "pod-695ee315-b47d-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.045912292s Jun 22 11:42:18.060: INFO: Pod "pod-695ee315-b47d-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.048178263s STEP: Saw pod success Jun 22 11:42:18.060: INFO: Pod "pod-695ee315-b47d-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:42:18.062: INFO: Trying to get logs from node hunter-worker pod pod-695ee315-b47d-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:42:18.083: INFO: Waiting for pod pod-695ee315-b47d-11ea-8cd8-0242ac11001b to disappear Jun 22 11:42:18.100: INFO: Pod pod-695ee315-b47d-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:42:18.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-knnkn" for this suite. Jun 22 11:42:24.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:42:24.152: INFO: namespace: e2e-tests-emptydir-knnkn, resource: bindings, ignored listing per whitelist Jun 22 11:42:24.226: INFO: namespace e2e-tests-emptydir-knnkn deletion completed in 6.122552825s • [SLOW TEST:12.317 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:42:24.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-70bf6b5e-b47d-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:42:24.388: INFO: Waiting up to 5m0s for pod "pod-configmaps-70c09ad1-b47d-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-configmap-vlxpw" to be "success or failure" Jun 22 11:42:24.398: INFO: Pod "pod-configmaps-70c09ad1-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.164449ms Jun 22 11:42:26.412: INFO: Pod "pod-configmaps-70c09ad1-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02434673s Jun 22 11:42:28.416: INFO: Pod "pod-configmaps-70c09ad1-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028218909s Jun 22 11:42:30.419: INFO: Pod "pod-configmaps-70c09ad1-b47d-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031712356s STEP: Saw pod success Jun 22 11:42:30.419: INFO: Pod "pod-configmaps-70c09ad1-b47d-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:42:30.422: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-70c09ad1-b47d-11ea-8cd8-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 22 11:42:30.474: INFO: Waiting for pod pod-configmaps-70c09ad1-b47d-11ea-8cd8-0242ac11001b to disappear Jun 22 11:42:30.488: INFO: Pod pod-configmaps-70c09ad1-b47d-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:42:30.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vlxpw" for this suite. Jun 22 11:42:36.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:42:36.537: INFO: namespace: e2e-tests-configmap-vlxpw, resource: bindings, ignored listing per whitelist Jun 22 11:42:36.580: INFO: namespace e2e-tests-configmap-vlxpw deletion completed in 6.089490137s • [SLOW TEST:12.354 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:42:36.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 22 11:42:36.707: INFO: Waiting up to 5m0s for pod "pod-7817c3a8-b47d-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-tft5k" to be "success or failure" Jun 22 11:42:36.723: INFO: Pod "pod-7817c3a8-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.603494ms Jun 22 11:42:38.728: INFO: Pod "pod-7817c3a8-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02094858s Jun 22 11:42:40.732: INFO: Pod "pod-7817c3a8-b47d-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.02541575s Jun 22 11:42:42.736: INFO: Pod "pod-7817c3a8-b47d-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029447872s STEP: Saw pod success Jun 22 11:42:42.736: INFO: Pod "pod-7817c3a8-b47d-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:42:42.739: INFO: Trying to get logs from node hunter-worker pod pod-7817c3a8-b47d-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:42:42.759: INFO: Waiting for pod pod-7817c3a8-b47d-11ea-8cd8-0242ac11001b to disappear Jun 22 11:42:42.765: INFO: Pod pod-7817c3a8-b47d-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:42:42.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tft5k" for this suite. Jun 22 11:42:48.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:42:48.818: INFO: namespace: e2e-tests-emptydir-tft5k, resource: bindings, ignored listing per whitelist Jun 22 11:42:48.834: INFO: namespace e2e-tests-emptydir-tft5k deletion completed in 6.066804033s • [SLOW TEST:12.253 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:42:48.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:43:48.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-9b7f9" for this suite. 
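The probe case above relies on a pod whose readiness probe can never succeed, so the pod stays unready for the whole observation window without ever being restarted. A minimal, non-authoritative sketch of such a pod spec using the same Go client types the e2e framework is built on (the pod name, image, and /bin/false probe command are illustrative, not taken from the test source):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose readiness probe always fails: it never becomes Ready, and
	// because this is a readiness (not liveness) probe the kubelet never
	// restarts the container either.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "always-unready"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				ReadinessProbe: &corev1.Probe{
					// Field is named Handler in the v1.13-era API shown in this
					// log (ProbeHandler in newer releases).
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
			RestartPolicy: corev1.RestartPolicyAlways,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}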
Jun 22 11:44:10.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:44:11.058: INFO: namespace: e2e-tests-container-probe-9b7f9, resource: bindings, ignored listing per whitelist Jun 22 11:44:11.058: INFO: namespace e2e-tests-container-probe-9b7f9 deletion completed in 22.09435246s • [SLOW TEST:82.225 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:44:11.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 22 11:44:11.182: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 11:44:11.204: INFO: Waiting for terminating namespaces to be deleted... Jun 22 11:44:11.207: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 22 11:44:11.214: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 22 11:44:11.214: INFO: Container coredns ready: true, restart count 0 Jun 22 11:44:11.214: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 22 11:44:11.214: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 11:44:11.214: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 22 11:44:11.214: INFO: Container kindnet-cni ready: true, restart count 0 Jun 22 11:44:11.214: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 22 11:44:11.219: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 22 11:44:11.219: INFO: Container kindnet-cni ready: true, restart count 0 Jun 22 11:44:11.219: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 22 11:44:11.219: INFO: Container coredns ready: true, restart count 0 Jun 22 11:44:11.219: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 22 11:44:11.219: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
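The predicate being exercised here is simply that a pod whose nodeSelector matches no node must stay Pending and produce a FailedScheduling event, as recorded on the next line. A rough sketch of the kind of pod involved (the label key and value below are invented placeholders, not the selector the test generates):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A nodeSelector no node satisfies keeps the pod Pending; the scheduler
	// emits a FailedScheduling event like the one logged below.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"example.com/nonexistent-label": "true", // placeholder: no node carries this label
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}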
STEP: Considering event: Type = [Warning], Name = [restricted-pod.161adb094da9c53f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:44:12.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-cbbgg" for this suite. Jun 22 11:44:18.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:44:18.384: INFO: namespace: e2e-tests-sched-pred-cbbgg, resource: bindings, ignored listing per whitelist Jun 22 11:44:18.407: INFO: namespace e2e-tests-sched-pred-cbbgg deletion completed in 6.119218972s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.349 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:44:18.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:44:18.563: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 4.816398ms)
Jun 22 11:44:18.566: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.408806ms)
Jun 22 11:44:18.570: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.259794ms)
Jun 22 11:44:18.573: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.413187ms)
Jun 22 11:44:18.577: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.383348ms)
Jun 22 11:44:18.580: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.826713ms)
Jun 22 11:44:18.584: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.182417ms)
Jun 22 11:44:18.587: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.437576ms)
Jun 22 11:44:18.590: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.134697ms)
Jun 22 11:44:18.594: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.275686ms)
Jun 22 11:44:18.597: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.069825ms)
Jun 22 11:44:18.600: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.890556ms)
Jun 22 11:44:18.603: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.026339ms)
Jun 22 11:44:18.607: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.769755ms)
Jun 22 11:44:18.609: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.935356ms)
Jun 22 11:44:18.612: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.827077ms)
Jun 22 11:44:18.615: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.921169ms)
Jun 22 11:44:18.618: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.991853ms)
Jun 22 11:44:18.621: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.801274ms)
Jun 22 11:44:18.624: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 2.678516ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:44:18.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-gtwst" for this suite. Jun 22 11:44:24.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:44:24.685: INFO: namespace: e2e-tests-proxy-gtwst, resource: bindings, ignored listing per whitelist Jun 22 11:44:24.720: INFO: namespace e2e-tests-proxy-gtwst deletion completed in 6.092274239s • [SLOW TEST:6.312 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:44:24.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 22 11:44:24.864: INFO: Waiting up to 5m0s for pod "pod-b8902520-b47d-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-cmc7r" to be "success or failure" Jun 22 11:44:24.868: INFO: Pod "pod-b8902520-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199316ms Jun 22 11:44:26.872: INFO: Pod "pod-b8902520-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008092733s Jun 22 11:44:28.875: INFO: Pod "pod-b8902520-b47d-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011557978s STEP: Saw pod success Jun 22 11:44:28.875: INFO: Pod "pod-b8902520-b47d-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:44:28.878: INFO: Trying to get logs from node hunter-worker pod pod-b8902520-b47d-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:44:29.099: INFO: Waiting for pod pod-b8902520-b47d-11ea-8cd8-0242ac11001b to disappear Jun 22 11:44:29.190: INFO: Pod pod-b8902520-b47d-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:44:29.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cmc7r" for this suite. 
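Each of these emptyDir cases boils down to mounting an emptyDir volume (the default medium here, i.e. node disk rather than tmpfs), having the container create a file with the requested mode, and checking the permissions it reports. A hedged sketch of an equivalent pod, with an illustrative busybox command standing in for the e2e mounttest image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// emptyDir on the default medium (node filesystem); the container creates
	// a file with 0666 permissions and prints them for verification.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-default"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /data/f && chmod 0666 /data/f && stat -c %a /data/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}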
Jun 22 11:44:35.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:44:35.362: INFO: namespace: e2e-tests-emptydir-cmc7r, resource: bindings, ignored listing per whitelist Jun 22 11:44:35.382: INFO: namespace e2e-tests-emptydir-cmc7r deletion completed in 6.186671739s • [SLOW TEST:10.662 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:44:35.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:44:35.507: INFO: Creating deployment "test-recreate-deployment" Jun 22 11:44:35.511: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 22 11:44:35.519: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jun 22 11:44:37.526: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 22 11:44:37.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728423075, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728423075, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728423075, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728423075, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 11:44:39.532: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 22 11:44:39.538: INFO: Updating deployment test-recreate-deployment Jun 22 11:44:39.538: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 22 11:44:39.845: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-4cgtg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4cgtg/deployments/test-recreate-deployment,UID:bee92bbc-b47d-11ea-99e8-0242ac110002,ResourceVersion:17289881,Generation:2,CreationTimestamp:2020-06-22 11:44:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-22 11:44:39 +0000 UTC 2020-06-22 11:44:39 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-22 11:44:39 +0000 UTC 2020-06-22 11:44:35 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 22 11:44:39.849: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-4cgtg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4cgtg/replicasets/test-recreate-deployment-589c4bfd,UID:c1602e41-b47d-11ea-99e8-0242ac110002,ResourceVersion:17289880,Generation:1,CreationTimestamp:2020-06-22 11:44:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bee92bbc-b47d-11ea-99e8-0242ac110002 0xc0026d6c2f 0xc0026d6c40}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 11:44:39.849: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 22 11:44:39.849: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-4cgtg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4cgtg/replicasets/test-recreate-deployment-5bf7f65dc,UID:beead9e5-b47d-11ea-99e8-0242ac110002,ResourceVersion:17289869,Generation:2,CreationTimestamp:2020-06-22 11:44:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bee92bbc-b47d-11ea-99e8-0242ac110002 0xc0026d6d10 0xc0026d6d11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 11:44:39.853: INFO: Pod "test-recreate-deployment-589c4bfd-ddwtd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-ddwtd,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-4cgtg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4cgtg/pods/test-recreate-deployment-589c4bfd-ddwtd,UID:c160bb33-b47d-11ea-99e8-0242ac110002,ResourceVersion:17289879,Generation:0,CreationTimestamp:2020-06-22 11:44:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd c1602e41-b47d-11ea-99e8-0242ac110002 0xc001c8e96f 0xc001c8e980}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-78ssm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-78ssm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-78ssm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c8e9f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c8ea10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:44:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:44:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:44:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 11:44:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-22 11:44:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:44:39.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-4cgtg" for this suite. 
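The Recreate strategy visible in the Deployment dump above (Strategy{Type:Recreate,RollingUpdate:nil}) is what guarantees the old pods are torn down before any new ones start. A minimal sketch of a Deployment using that strategy; the name and labels are placeholders, only the image is taken from the dump:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "recreate-demo"} // placeholder label
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "recreate-demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: scale the old ReplicaSet to zero before the new one
			// comes up, so old and new pods never run side by side.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}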
Jun 22 11:44:45.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:44:46.016: INFO: namespace: e2e-tests-deployment-4cgtg, resource: bindings, ignored listing per whitelist Jun 22 11:44:46.043: INFO: namespace e2e-tests-deployment-4cgtg deletion completed in 6.18744357s • [SLOW TEST:10.662 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:44:46.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Jun 22 11:44:46.695: INFO: Waiting up to 5m0s for pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8qtpf" in namespace "e2e-tests-svcaccounts-7c7vm" to be "success or failure" Jun 22 11:44:46.702: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8qtpf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382578ms Jun 22 11:44:48.705: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8qtpf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010122813s Jun 22 11:44:50.726: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8qtpf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030944922s Jun 22 11:44:52.730: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8qtpf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035194387s STEP: Saw pod success Jun 22 11:44:52.730: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8qtpf" satisfied condition "success or failure" Jun 22 11:44:52.733: INFO: Trying to get logs from node hunter-worker pod pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8qtpf container token-test: STEP: delete the pod Jun 22 11:44:52.751: INFO: Waiting for pod pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8qtpf to disappear Jun 22 11:44:52.755: INFO: Pod pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8qtpf no longer exists STEP: Creating a pod to test consume service account root CA Jun 22 11:44:52.759: INFO: Waiting up to 5m0s for pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h" in namespace "e2e-tests-svcaccounts-7c7vm" to be "success or failure" Jun 22 11:44:52.772: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h": Phase="Pending", Reason="", readiness=false. Elapsed: 13.361291ms Jun 22 11:44:54.775: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016895492s Jun 22 11:44:56.846: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087342808s Jun 22 11:44:58.850: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091708137s Jun 22 11:45:00.990: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.23125672s STEP: Saw pod success Jun 22 11:45:00.990: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h" satisfied condition "success or failure" Jun 22 11:45:00.992: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h container root-ca-test: STEP: delete the pod Jun 22 11:45:01.163: INFO: Waiting for pod pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h to disappear Jun 22 11:45:01.223: INFO: Pod pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-8lk7h no longer exists STEP: Creating a pod to test consume service account namespace Jun 22 11:45:01.226: INFO: Waiting up to 5m0s for pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27" in namespace "e2e-tests-svcaccounts-7c7vm" to be "success or failure" Jun 22 11:45:01.470: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27": Phase="Pending", Reason="", readiness=false. Elapsed: 243.859235ms Jun 22 11:45:03.473: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24696293s Jun 22 11:45:05.529: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303113686s Jun 22 11:45:07.532: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30595822s Jun 22 11:45:09.537: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.310770415s STEP: Saw pod success Jun 22 11:45:09.537: INFO: Pod "pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27" satisfied condition "success or failure" Jun 22 11:45:09.540: INFO: Trying to get logs from node hunter-worker pod pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27 container namespace-test: STEP: delete the pod Jun 22 11:45:09.560: INFO: Waiting for pod pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27 to disappear Jun 22 11:45:09.578: INFO: Pod pod-service-account-c593449c-b47d-11ea-8cd8-0242ac11001b-9zn27 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:45:09.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-7c7vm" for this suite. 
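What the three probe pods above each verify is that the auto-created ServiceAccount token Secret is projected into the container at the standard path. From inside any pod the same check is a few file reads; a sketch using only the Go standard library (the mount path is the well-known default, everything else is illustrative):

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

func main() {
	// Default ServiceAccount credentials are mounted at this well-known path.
	const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := ioutil.ReadFile(filepath.Join(saDir, name))
		if err != nil {
			fmt.Printf("%s: not available (%v)\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}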
Jun 22 11:45:15.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:45:15.667: INFO: namespace: e2e-tests-svcaccounts-7c7vm, resource: bindings, ignored listing per whitelist Jun 22 11:45:15.696: INFO: namespace e2e-tests-svcaccounts-7c7vm deletion completed in 6.11484139s • [SLOW TEST:29.652 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:45:15.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-d6e6f463-b47d-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:45:15.808: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6e980b6-b47d-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-configmap-tldl2" to be "success or failure" Jun 22 11:45:15.811: INFO: Pod "pod-configmaps-d6e980b6-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452553ms Jun 22 11:45:17.815: INFO: Pod "pod-configmaps-d6e980b6-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006959273s Jun 22 11:45:19.819: INFO: Pod "pod-configmaps-d6e980b6-b47d-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011077594s STEP: Saw pod success Jun 22 11:45:19.819: INFO: Pod "pod-configmaps-d6e980b6-b47d-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:45:19.822: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-d6e980b6-b47d-11ea-8cd8-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 22 11:45:19.916: INFO: Waiting for pod pod-configmaps-d6e980b6-b47d-11ea-8cd8-0242ac11001b to disappear Jun 22 11:45:19.924: INFO: Pod pod-configmaps-d6e980b6-b47d-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:45:19.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tldl2" for this suite. 
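"Consumable in multiple volumes in the same pod" means the same ConfigMap is referenced by two volume entries and mounted at two different paths in one container. A rough sketch (the ConfigMap name, mount paths, and command are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One ConfigMap, mounted twice in the same pod under different paths.
	cmSource := corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "shared-config"}, // placeholder name
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-two-mounts"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "cfg-a", VolumeSource: cmSource},
				{Name: "cfg-b", VolumeSource: cmSource},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cfg-a/* /etc/cfg-b/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cfg-a", MountPath: "/etc/cfg-a"},
					{Name: "cfg-b", MountPath: "/etc/cfg-b"},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}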
Jun 22 11:45:25.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:45:25.972: INFO: namespace: e2e-tests-configmap-tldl2, resource: bindings, ignored listing per whitelist Jun 22 11:45:26.094: INFO: namespace e2e-tests-configmap-tldl2 deletion completed in 6.166191522s • [SLOW TEST:10.398 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:45:26.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:45:26.213: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:45:30.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-r7xg7" for this suite. 
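The pods test above drives exec against the pod's exec subresource over a websocket connection to the API server. As a non-authoritative sketch of the same idea from Go, client-go's remotecommand package can issue an equivalent exec (via its SPDY-based executor rather than the raw websocket path the conformance test exercises); the namespace, pod, container, and command below are placeholders:

package main

import (
	"bytes"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Load the same kubeconfig the e2e run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Build a request against the pod's exec subresource (placeholder pod/container).
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("default").
		Name("example-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"cat", "/etc/resolv.conf"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		log.Fatal(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		log.Fatal(err)
	}
	fmt.Println(stdout.String())
}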
Jun 22 11:46:12.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:46:12.415: INFO: namespace: e2e-tests-pods-r7xg7, resource: bindings, ignored listing per whitelist Jun 22 11:46:12.446: INFO: namespace e2e-tests-pods-r7xg7 deletion completed in 42.12675296s • [SLOW TEST:46.352 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:46:12.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 11:46:12.670: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8cc554e-b47d-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-qhg97" to be "success or failure" Jun 22 11:46:12.716: INFO: Pod "downwardapi-volume-f8cc554e-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 45.780418ms Jun 22 11:46:14.722: INFO: Pod "downwardapi-volume-f8cc554e-b47d-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051794197s Jun 22 11:46:16.726: INFO: Pod "downwardapi-volume-f8cc554e-b47d-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.055652739s Jun 22 11:46:18.731: INFO: Pod "downwardapi-volume-f8cc554e-b47d-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060941247s STEP: Saw pod success Jun 22 11:46:18.731: INFO: Pod "downwardapi-volume-f8cc554e-b47d-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:46:18.735: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f8cc554e-b47d-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 11:46:18.807: INFO: Waiting for pod downwardapi-volume-f8cc554e-b47d-11ea-8cd8-0242ac11001b to disappear Jun 22 11:46:18.841: INFO: Pod downwardapi-volume-f8cc554e-b47d-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:46:18.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qhg97" for this suite. 
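"Should provide podname only" boils down to a projected volume whose single downwardAPI item exposes metadata.name as a file. A hedged sketch of that volume wiring (the mount path and file name are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Projected volume exposing only the pod's own name via the downward API.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-podname"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}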
Jun 22 11:46:24.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:46:24.889: INFO: namespace: e2e-tests-projected-qhg97, resource: bindings, ignored listing per whitelist Jun 22 11:46:24.930: INFO: namespace e2e-tests-projected-qhg97 deletion completed in 6.085496114s • [SLOW TEST:12.484 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:46:24.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9tz89 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 11:46:25.011: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 11:46:51.175: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.120:8080/dial?request=hostName&protocol=http&host=10.244.2.119&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9tz89 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 11:46:51.175: INFO: >>> kubeConfig: /root/.kube/config I0622 11:46:51.206344 7 log.go:172] (0xc000e40420) (0xc000a72460) Create stream I0622 11:46:51.206391 7 log.go:172] (0xc000e40420) (0xc000a72460) Stream added, broadcasting: 1 I0622 11:46:51.208351 7 log.go:172] (0xc000e40420) Reply frame received for 1 I0622 11:46:51.208395 7 log.go:172] (0xc000e40420) (0xc000a72500) Create stream I0622 11:46:51.208416 7 log.go:172] (0xc000e40420) (0xc000a72500) Stream added, broadcasting: 3 I0622 11:46:51.209892 7 log.go:172] (0xc000e40420) Reply frame received for 3 I0622 11:46:51.209945 7 log.go:172] (0xc000e40420) (0xc00259a0a0) Create stream I0622 11:46:51.209978 7 log.go:172] (0xc000e40420) (0xc00259a0a0) Stream added, broadcasting: 5 I0622 11:46:51.210910 7 log.go:172] (0xc000e40420) Reply frame received for 5 I0622 11:46:51.442874 7 log.go:172] (0xc000e40420) Data frame received for 3 I0622 11:46:51.442920 7 log.go:172] (0xc000a72500) (3) Data frame handling I0622 11:46:51.442959 7 log.go:172] (0xc000a72500) (3) Data frame sent I0622 11:46:51.443624 7 log.go:172] (0xc000e40420) Data frame received for 3 I0622 11:46:51.443656 7 log.go:172] (0xc000a72500) (3) Data frame handling I0622 11:46:51.443708 7 log.go:172] (0xc000e40420) Data frame received for 5 I0622 11:46:51.443730 7 log.go:172] (0xc00259a0a0) (5) Data frame handling I0622 11:46:51.445519 7 log.go:172] (0xc000e40420) Data frame received for 1 I0622 11:46:51.445543 7 
log.go:172] (0xc000a72460) (1) Data frame handling I0622 11:46:51.445562 7 log.go:172] (0xc000a72460) (1) Data frame sent I0622 11:46:51.445584 7 log.go:172] (0xc000e40420) (0xc000a72460) Stream removed, broadcasting: 1 I0622 11:46:51.445675 7 log.go:172] (0xc000e40420) (0xc000a72460) Stream removed, broadcasting: 1 I0622 11:46:51.445690 7 log.go:172] (0xc000e40420) (0xc000a72500) Stream removed, broadcasting: 3 I0622 11:46:51.445703 7 log.go:172] (0xc000e40420) (0xc00259a0a0) Stream removed, broadcasting: 5 Jun 22 11:46:51.445: INFO: Waiting for endpoints: map[] I0622 11:46:51.446073 7 log.go:172] (0xc000e40420) Go away received Jun 22 11:46:51.489: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.120:8080/dial?request=hostName&protocol=http&host=10.244.1.88&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9tz89 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 11:46:51.489: INFO: >>> kubeConfig: /root/.kube/config I0622 11:46:51.520111 7 log.go:172] (0xc000e12790) (0xc00259a320) Create stream I0622 11:46:51.520145 7 log.go:172] (0xc000e12790) (0xc00259a320) Stream added, broadcasting: 1 I0622 11:46:51.522424 7 log.go:172] (0xc000e12790) Reply frame received for 1 I0622 11:46:51.522469 7 log.go:172] (0xc000e12790) (0xc000a725a0) Create stream I0622 11:46:51.522480 7 log.go:172] (0xc000e12790) (0xc000a725a0) Stream added, broadcasting: 3 I0622 11:46:51.523316 7 log.go:172] (0xc000e12790) Reply frame received for 3 I0622 11:46:51.523355 7 log.go:172] (0xc000e12790) (0xc001b2c780) Create stream I0622 11:46:51.523368 7 log.go:172] (0xc000e12790) (0xc001b2c780) Stream added, broadcasting: 5 I0622 11:46:51.524166 7 log.go:172] (0xc000e12790) Reply frame received for 5 I0622 11:46:51.579610 7 log.go:172] (0xc000e12790) Data frame received for 3 I0622 11:46:51.579633 7 log.go:172] (0xc000a725a0) (3) Data frame handling I0622 11:46:51.579650 7 log.go:172] (0xc000a725a0) (3) Data frame sent I0622 11:46:51.580392 7 log.go:172] (0xc000e12790) Data frame received for 5 I0622 11:46:51.580413 7 log.go:172] (0xc001b2c780) (5) Data frame handling I0622 11:46:51.580562 7 log.go:172] (0xc000e12790) Data frame received for 3 I0622 11:46:51.580578 7 log.go:172] (0xc000a725a0) (3) Data frame handling I0622 11:46:51.582191 7 log.go:172] (0xc000e12790) Data frame received for 1 I0622 11:46:51.582217 7 log.go:172] (0xc00259a320) (1) Data frame handling I0622 11:46:51.582233 7 log.go:172] (0xc00259a320) (1) Data frame sent I0622 11:46:51.582251 7 log.go:172] (0xc000e12790) (0xc00259a320) Stream removed, broadcasting: 1 I0622 11:46:51.582269 7 log.go:172] (0xc000e12790) Go away received I0622 11:46:51.582371 7 log.go:172] (0xc000e12790) (0xc00259a320) Stream removed, broadcasting: 1 I0622 11:46:51.582389 7 log.go:172] (0xc000e12790) (0xc000a725a0) Stream removed, broadcasting: 3 I0622 11:46:51.582400 7 log.go:172] (0xc000e12790) (0xc001b2c780) Stream removed, broadcasting: 5 Jun 22 11:46:51.582: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:46:51.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9tz89" for this suite. 
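The curl shown in the exec output above hits the test webserver's /dial endpoint, which in turn makes an HTTP request to the target pod IP and reports what it received. The same probe can be issued with a plain HTTP client; a rough sketch that reuses the pod IPs seen in this run and assumes only that the endpoint returns a small JSON document (no exact field names are asserted):

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Ask the host-test pod's webserver to dial another pod over HTTP,
	// mirroring the e2e /dial query seen in the log above.
	url := "http://10.244.2.120:8080/dial?request=hostName&protocol=http&host=10.244.2.119&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// Decode generically rather than assuming the exact response schema.
	var result map[string]interface{}
	if err := json.Unmarshal(body, &result); err != nil {
		log.Fatalf("unexpected response %q: %v", body, err)
	}
	fmt.Printf("dial result: %v\n", result)
}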
Jun 22 11:47:16.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:47:16.485: INFO: namespace: e2e-tests-pod-network-test-9tz89, resource: bindings, ignored listing per whitelist Jun 22 11:47:16.504: INFO: namespace e2e-tests-pod-network-test-9tz89 deletion completed in 24.917806244s • [SLOW TEST:51.573 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:47:16.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-1f4ae362-b47e-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 11:47:17.307: INFO: Waiting up to 5m0s for pod "pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-secrets-dslcl" to be "success or failure" Jun 22 11:47:17.582: INFO: Pod "pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 274.704638ms Jun 22 11:47:19.746: INFO: Pod "pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439563446s Jun 22 11:47:21.974: INFO: Pod "pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.667462645s Jun 22 11:47:24.047: INFO: Pod "pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.739775587s Jun 22 11:47:26.190: INFO: Pod "pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.883392857s STEP: Saw pod success Jun 22 11:47:26.190: INFO: Pod "pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:47:26.391: INFO: Trying to get logs from node hunter-worker pod pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b container secret-env-test: STEP: delete the pod Jun 22 11:47:26.604: INFO: Waiting for pod pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b to disappear Jun 22 11:47:27.058: INFO: Pod pod-secrets-1f56af95-b47e-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:47:27.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dslcl" for this suite. 
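The Secrets case above creates a secret, then a short-lived pod whose container environment pulls a value from that secret and exits; the pod reaching Succeeded is the "success or failure" condition being polled. A minimal sketch of such a pod spec, built with current k8s.io/api types, is below; the secret name, key, image and command are illustrative, not the generated names from the log.

// secret_env_pod.go - a sketch of a pod that consumes a Secret key as an
// environment variable, in the shape exercised by the test above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-example"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}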
Jun 22 11:47:33.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:47:33.303: INFO: namespace: e2e-tests-secrets-dslcl, resource: bindings, ignored listing per whitelist Jun 22 11:47:33.351: INFO: namespace e2e-tests-secrets-dslcl deletion completed in 6.288180411s • [SLOW TEST:16.847 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:47:33.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 22 11:47:33.755: INFO: Waiting up to 5m0s for pod "pod-2926a4e0-b47e-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-hd4hk" to be "success or failure" Jun 22 11:47:33.771: INFO: Pod "pod-2926a4e0-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.491182ms Jun 22 11:47:35.775: INFO: Pod "pod-2926a4e0-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019450048s Jun 22 11:47:37.779: INFO: Pod "pod-2926a4e0-b47e-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023668939s STEP: Saw pod success Jun 22 11:47:37.779: INFO: Pod "pod-2926a4e0-b47e-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:47:37.783: INFO: Trying to get logs from node hunter-worker pod pod-2926a4e0-b47e-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:47:37.999: INFO: Waiting for pod pod-2926a4e0-b47e-11ea-8cd8-0242ac11001b to disappear Jun 22 11:47:38.023: INFO: Pod pod-2926a4e0-b47e-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:47:38.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hd4hk" for this suite. 
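The EmptyDir "(non-root,0777,tmpfs)" case boils down to a pod that mounts a Memory-medium emptyDir (which is what makes it tmpfs-backed), runs as a non-root UID, and verifies it can create a world-writable 0777 file on that mount. A sketch of that shape follows, assuming busybox and an arbitrary UID in place of the framework's dedicated mounttest image.

// emptydir_tmpfs_pod.go - a sketch of a non-root pod writing a 0777 file on
// a tmpfs-backed emptyDir volume. Image, UID and command are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &nonRoot, // run the whole pod as a non-root UID
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium=Memory makes the emptyDir tmpfs-backed.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /mnt/test/file && chmod 0777 /mnt/test/file && stat -c %a /mnt/test/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}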
Jun 22 11:47:44.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:47:44.143: INFO: namespace: e2e-tests-emptydir-hd4hk, resource: bindings, ignored listing per whitelist Jun 22 11:47:44.206: INFO: namespace e2e-tests-emptydir-hd4hk deletion completed in 6.180255649s • [SLOW TEST:10.854 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:47:44.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-r7n8f [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-r7n8f STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-r7n8f Jun 22 11:47:44.741: INFO: Found 0 stateful pods, waiting for 1 Jun 22 11:47:54.745: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 22 11:47:54.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r7n8f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:47:54.975: INFO: stderr: "I0622 11:47:54.866701 1928 log.go:172] (0xc00089e2c0) (0xc00036b5e0) Create stream\nI0622 11:47:54.866797 1928 log.go:172] (0xc00089e2c0) (0xc00036b5e0) Stream added, broadcasting: 1\nI0622 11:47:54.868865 1928 log.go:172] (0xc00089e2c0) Reply frame received for 1\nI0622 11:47:54.868891 1928 log.go:172] (0xc00089e2c0) (0xc00030e000) Create stream\nI0622 11:47:54.868898 1928 log.go:172] (0xc00089e2c0) (0xc00030e000) Stream added, broadcasting: 3\nI0622 11:47:54.869868 1928 log.go:172] (0xc00089e2c0) Reply frame received for 3\nI0622 11:47:54.869909 1928 log.go:172] (0xc00089e2c0) (0xc00036b680) Create stream\nI0622 11:47:54.869922 1928 log.go:172] (0xc00089e2c0) (0xc00036b680) Stream added, broadcasting: 5\nI0622 11:47:54.870641 1928 log.go:172] (0xc00089e2c0) Reply frame received for 5\nI0622 11:47:54.968378 1928 log.go:172] (0xc00089e2c0) Data frame received 
for 5\nI0622 11:47:54.968398 1928 log.go:172] (0xc00036b680) (5) Data frame handling\nI0622 11:47:54.968423 1928 log.go:172] (0xc00089e2c0) Data frame received for 3\nI0622 11:47:54.968438 1928 log.go:172] (0xc00030e000) (3) Data frame handling\nI0622 11:47:54.968447 1928 log.go:172] (0xc00030e000) (3) Data frame sent\nI0622 11:47:54.968452 1928 log.go:172] (0xc00089e2c0) Data frame received for 3\nI0622 11:47:54.968456 1928 log.go:172] (0xc00030e000) (3) Data frame handling\nI0622 11:47:54.970610 1928 log.go:172] (0xc00089e2c0) Data frame received for 1\nI0622 11:47:54.970635 1928 log.go:172] (0xc00036b5e0) (1) Data frame handling\nI0622 11:47:54.970651 1928 log.go:172] (0xc00036b5e0) (1) Data frame sent\nI0622 11:47:54.970675 1928 log.go:172] (0xc00089e2c0) (0xc00036b5e0) Stream removed, broadcasting: 1\nI0622 11:47:54.970755 1928 log.go:172] (0xc00089e2c0) Go away received\nI0622 11:47:54.970826 1928 log.go:172] (0xc00089e2c0) (0xc00036b5e0) Stream removed, broadcasting: 1\nI0622 11:47:54.970865 1928 log.go:172] (0xc00089e2c0) (0xc00030e000) Stream removed, broadcasting: 3\nI0622 11:47:54.970876 1928 log.go:172] (0xc00089e2c0) (0xc00036b680) Stream removed, broadcasting: 5\n" Jun 22 11:47:54.975: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:47:54.975: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 11:47:54.978: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 22 11:48:04.983: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 11:48:04.983: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 11:48:05.000: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999623s Jun 22 11:48:06.004: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993527844s Jun 22 11:48:07.010: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988537624s Jun 22 11:48:08.014: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983563856s Jun 22 11:48:09.018: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979651921s Jun 22 11:48:10.023: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974986138s Jun 22 11:48:11.028: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.970447408s Jun 22 11:48:12.033: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965259818s Jun 22 11:48:13.038: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.960059862s Jun 22 11:48:14.043: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.118373ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-r7n8f Jun 22 11:48:15.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r7n8f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 11:48:15.244: INFO: stderr: "I0622 11:48:15.177029 1950 log.go:172] (0xc00015c840) (0xc00076a640) Create stream\nI0622 11:48:15.177096 1950 log.go:172] (0xc00015c840) (0xc00076a640) Stream added, broadcasting: 1\nI0622 11:48:15.180198 1950 log.go:172] (0xc00015c840) Reply frame received for 1\nI0622 11:48:15.180254 1950 log.go:172] (0xc00015c840) (0xc00067cb40) Create stream\nI0622 11:48:15.180268 1950 log.go:172] (0xc00015c840) (0xc00067cb40) Stream added, 
broadcasting: 3\nI0622 11:48:15.181100 1950 log.go:172] (0xc00015c840) Reply frame received for 3\nI0622 11:48:15.181329 1950 log.go:172] (0xc00015c840) (0xc0007ce000) Create stream\nI0622 11:48:15.181355 1950 log.go:172] (0xc00015c840) (0xc0007ce000) Stream added, broadcasting: 5\nI0622 11:48:15.182365 1950 log.go:172] (0xc00015c840) Reply frame received for 5\nI0622 11:48:15.237534 1950 log.go:172] (0xc00015c840) Data frame received for 5\nI0622 11:48:15.237571 1950 log.go:172] (0xc0007ce000) (5) Data frame handling\nI0622 11:48:15.237590 1950 log.go:172] (0xc00015c840) Data frame received for 3\nI0622 11:48:15.237597 1950 log.go:172] (0xc00067cb40) (3) Data frame handling\nI0622 11:48:15.237611 1950 log.go:172] (0xc00067cb40) (3) Data frame sent\nI0622 11:48:15.237617 1950 log.go:172] (0xc00015c840) Data frame received for 3\nI0622 11:48:15.237622 1950 log.go:172] (0xc00067cb40) (3) Data frame handling\nI0622 11:48:15.238948 1950 log.go:172] (0xc00015c840) Data frame received for 1\nI0622 11:48:15.238963 1950 log.go:172] (0xc00076a640) (1) Data frame handling\nI0622 11:48:15.238971 1950 log.go:172] (0xc00076a640) (1) Data frame sent\nI0622 11:48:15.239037 1950 log.go:172] (0xc00015c840) (0xc00076a640) Stream removed, broadcasting: 1\nI0622 11:48:15.239062 1950 log.go:172] (0xc00015c840) Go away received\nI0622 11:48:15.239338 1950 log.go:172] (0xc00015c840) (0xc00076a640) Stream removed, broadcasting: 1\nI0622 11:48:15.239367 1950 log.go:172] (0xc00015c840) (0xc00067cb40) Stream removed, broadcasting: 3\nI0622 11:48:15.239384 1950 log.go:172] (0xc00015c840) (0xc0007ce000) Stream removed, broadcasting: 5\n" Jun 22 11:48:15.245: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 11:48:15.245: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 11:48:15.248: INFO: Found 1 stateful pods, waiting for 3 Jun 22 11:48:25.259: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:48:25.259: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:48:25.259: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 22 11:48:35.253: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:48:35.253: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 11:48:35.253: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 22 11:48:35.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r7n8f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:48:35.447: INFO: stderr: "I0622 11:48:35.378705 1973 log.go:172] (0xc000138790) (0xc000693400) Create stream\nI0622 11:48:35.378756 1973 log.go:172] (0xc000138790) (0xc000693400) Stream added, broadcasting: 1\nI0622 11:48:35.381574 1973 log.go:172] (0xc000138790) Reply frame received for 1\nI0622 11:48:35.381623 1973 log.go:172] (0xc000138790) (0xc000762000) Create stream\nI0622 11:48:35.381641 1973 log.go:172] (0xc000138790) (0xc000762000) Stream added, broadcasting: 3\nI0622 11:48:35.382485 1973 log.go:172] (0xc000138790) Reply frame received for 3\nI0622 11:48:35.382514 1973 log.go:172] 
(0xc000138790) (0xc0006934a0) Create stream\nI0622 11:48:35.382523 1973 log.go:172] (0xc000138790) (0xc0006934a0) Stream added, broadcasting: 5\nI0622 11:48:35.383276 1973 log.go:172] (0xc000138790) Reply frame received for 5\nI0622 11:48:35.439846 1973 log.go:172] (0xc000138790) Data frame received for 5\nI0622 11:48:35.439874 1973 log.go:172] (0xc0006934a0) (5) Data frame handling\nI0622 11:48:35.439890 1973 log.go:172] (0xc000138790) Data frame received for 3\nI0622 11:48:35.439895 1973 log.go:172] (0xc000762000) (3) Data frame handling\nI0622 11:48:35.439900 1973 log.go:172] (0xc000762000) (3) Data frame sent\nI0622 11:48:35.439905 1973 log.go:172] (0xc000138790) Data frame received for 3\nI0622 11:48:35.439911 1973 log.go:172] (0xc000762000) (3) Data frame handling\nI0622 11:48:35.441418 1973 log.go:172] (0xc000138790) Data frame received for 1\nI0622 11:48:35.441441 1973 log.go:172] (0xc000693400) (1) Data frame handling\nI0622 11:48:35.441456 1973 log.go:172] (0xc000693400) (1) Data frame sent\nI0622 11:48:35.441473 1973 log.go:172] (0xc000138790) (0xc000693400) Stream removed, broadcasting: 1\nI0622 11:48:35.441486 1973 log.go:172] (0xc000138790) Go away received\nI0622 11:48:35.441736 1973 log.go:172] (0xc000138790) (0xc000693400) Stream removed, broadcasting: 1\nI0622 11:48:35.441752 1973 log.go:172] (0xc000138790) (0xc000762000) Stream removed, broadcasting: 3\nI0622 11:48:35.441759 1973 log.go:172] (0xc000138790) (0xc0006934a0) Stream removed, broadcasting: 5\n" Jun 22 11:48:35.447: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:48:35.447: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 11:48:35.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r7n8f ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:48:35.714: INFO: stderr: "I0622 11:48:35.559971 1995 log.go:172] (0xc0008582c0) (0xc000680640) Create stream\nI0622 11:48:35.560025 1995 log.go:172] (0xc0008582c0) (0xc000680640) Stream added, broadcasting: 1\nI0622 11:48:35.562202 1995 log.go:172] (0xc0008582c0) Reply frame received for 1\nI0622 11:48:35.562243 1995 log.go:172] (0xc0008582c0) (0xc0001fcdc0) Create stream\nI0622 11:48:35.562253 1995 log.go:172] (0xc0008582c0) (0xc0001fcdc0) Stream added, broadcasting: 3\nI0622 11:48:35.563402 1995 log.go:172] (0xc0008582c0) Reply frame received for 3\nI0622 11:48:35.563431 1995 log.go:172] (0xc0008582c0) (0xc0001fcf00) Create stream\nI0622 11:48:35.563443 1995 log.go:172] (0xc0008582c0) (0xc0001fcf00) Stream added, broadcasting: 5\nI0622 11:48:35.564274 1995 log.go:172] (0xc0008582c0) Reply frame received for 5\nI0622 11:48:35.706992 1995 log.go:172] (0xc0008582c0) Data frame received for 3\nI0622 11:48:35.707047 1995 log.go:172] (0xc0001fcdc0) (3) Data frame handling\nI0622 11:48:35.707084 1995 log.go:172] (0xc0001fcdc0) (3) Data frame sent\nI0622 11:48:35.707265 1995 log.go:172] (0xc0008582c0) Data frame received for 3\nI0622 11:48:35.707287 1995 log.go:172] (0xc0001fcdc0) (3) Data frame handling\nI0622 11:48:35.707316 1995 log.go:172] (0xc0008582c0) Data frame received for 5\nI0622 11:48:35.707340 1995 log.go:172] (0xc0001fcf00) (5) Data frame handling\nI0622 11:48:35.709355 1995 log.go:172] (0xc0008582c0) Data frame received for 1\nI0622 11:48:35.709395 1995 log.go:172] (0xc000680640) (1) Data frame handling\nI0622 11:48:35.709422 1995 log.go:172] 
(0xc000680640) (1) Data frame sent\nI0622 11:48:35.709535 1995 log.go:172] (0xc0008582c0) (0xc000680640) Stream removed, broadcasting: 1\nI0622 11:48:35.709588 1995 log.go:172] (0xc0008582c0) Go away received\nI0622 11:48:35.709814 1995 log.go:172] (0xc0008582c0) (0xc000680640) Stream removed, broadcasting: 1\nI0622 11:48:35.709839 1995 log.go:172] (0xc0008582c0) (0xc0001fcdc0) Stream removed, broadcasting: 3\nI0622 11:48:35.709848 1995 log.go:172] (0xc0008582c0) (0xc0001fcf00) Stream removed, broadcasting: 5\n" Jun 22 11:48:35.714: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:48:35.714: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 11:48:35.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r7n8f ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 11:48:35.972: INFO: stderr: "I0622 11:48:35.850055 2018 log.go:172] (0xc000138840) (0xc0006852c0) Create stream\nI0622 11:48:35.850097 2018 log.go:172] (0xc000138840) (0xc0006852c0) Stream added, broadcasting: 1\nI0622 11:48:35.852106 2018 log.go:172] (0xc000138840) Reply frame received for 1\nI0622 11:48:35.852152 2018 log.go:172] (0xc000138840) (0xc0006fa000) Create stream\nI0622 11:48:35.852174 2018 log.go:172] (0xc000138840) (0xc0006fa000) Stream added, broadcasting: 3\nI0622 11:48:35.853019 2018 log.go:172] (0xc000138840) Reply frame received for 3\nI0622 11:48:35.853042 2018 log.go:172] (0xc000138840) (0xc0006fa0a0) Create stream\nI0622 11:48:35.853051 2018 log.go:172] (0xc000138840) (0xc0006fa0a0) Stream added, broadcasting: 5\nI0622 11:48:35.853970 2018 log.go:172] (0xc000138840) Reply frame received for 5\nI0622 11:48:35.962895 2018 log.go:172] (0xc000138840) Data frame received for 3\nI0622 11:48:35.962922 2018 log.go:172] (0xc0006fa000) (3) Data frame handling\nI0622 11:48:35.962937 2018 log.go:172] (0xc0006fa000) (3) Data frame sent\nI0622 11:48:35.963161 2018 log.go:172] (0xc000138840) Data frame received for 5\nI0622 11:48:35.963192 2018 log.go:172] (0xc000138840) Data frame received for 3\nI0622 11:48:35.963220 2018 log.go:172] (0xc0006fa000) (3) Data frame handling\nI0622 11:48:35.963243 2018 log.go:172] (0xc0006fa0a0) (5) Data frame handling\nI0622 11:48:35.964384 2018 log.go:172] (0xc000138840) Data frame received for 1\nI0622 11:48:35.964400 2018 log.go:172] (0xc0006852c0) (1) Data frame handling\nI0622 11:48:35.964405 2018 log.go:172] (0xc0006852c0) (1) Data frame sent\nI0622 11:48:35.964416 2018 log.go:172] (0xc000138840) (0xc0006852c0) Stream removed, broadcasting: 1\nI0622 11:48:35.964429 2018 log.go:172] (0xc000138840) Go away received\nI0622 11:48:35.964600 2018 log.go:172] (0xc000138840) (0xc0006852c0) Stream removed, broadcasting: 1\nI0622 11:48:35.964623 2018 log.go:172] (0xc000138840) (0xc0006fa000) Stream removed, broadcasting: 3\nI0622 11:48:35.964634 2018 log.go:172] (0xc000138840) (0xc0006fa0a0) Stream removed, broadcasting: 5\n" Jun 22 11:48:35.972: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 11:48:35.972: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 11:48:35.972: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 11:48:35.975: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 22 11:48:45.981: INFO: Waiting for pod 
ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 11:48:45.981: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 22 11:48:45.981: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 22 11:48:46.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999399s Jun 22 11:48:47.170: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.822972958s Jun 22 11:48:48.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.816376395s Jun 22 11:48:49.182: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.811122658s Jun 22 11:48:50.187: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.804455263s Jun 22 11:48:51.192: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.799180246s Jun 22 11:48:52.198: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.793938303s Jun 22 11:48:53.203: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.788754194s Jun 22 11:48:54.207: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.783280101s Jun 22 11:48:55.213: INFO: Verifying statefulset ss doesn't scale past 3 for another 779.244903ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-r7n8f Jun 22 11:48:56.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r7n8f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 11:48:56.462: INFO: stderr: "I0622 11:48:56.356445 2041 log.go:172] (0xc00014c790) (0xc0005db4a0) Create stream\nI0622 11:48:56.356503 2041 log.go:172] (0xc00014c790) (0xc0005db4a0) Stream added, broadcasting: 1\nI0622 11:48:56.368824 2041 log.go:172] (0xc00014c790) Reply frame received for 1\nI0622 11:48:56.368911 2041 log.go:172] (0xc00014c790) (0xc000590000) Create stream\nI0622 11:48:56.368935 2041 log.go:172] (0xc00014c790) (0xc000590000) Stream added, broadcasting: 3\nI0622 11:48:56.370746 2041 log.go:172] (0xc00014c790) Reply frame received for 3\nI0622 11:48:56.370840 2041 log.go:172] (0xc00014c790) (0xc000344000) Create stream\nI0622 11:48:56.370860 2041 log.go:172] (0xc00014c790) (0xc000344000) Stream added, broadcasting: 5\nI0622 11:48:56.372226 2041 log.go:172] (0xc00014c790) Reply frame received for 5\nI0622 11:48:56.454179 2041 log.go:172] (0xc00014c790) Data frame received for 5\nI0622 11:48:56.454222 2041 log.go:172] (0xc000344000) (5) Data frame handling\nI0622 11:48:56.454253 2041 log.go:172] (0xc00014c790) Data frame received for 3\nI0622 11:48:56.454281 2041 log.go:172] (0xc000590000) (3) Data frame handling\nI0622 11:48:56.454320 2041 log.go:172] (0xc000590000) (3) Data frame sent\nI0622 11:48:56.454336 2041 log.go:172] (0xc00014c790) Data frame received for 3\nI0622 11:48:56.454359 2041 log.go:172] (0xc000590000) (3) Data frame handling\nI0622 11:48:56.455745 2041 log.go:172] (0xc00014c790) Data frame received for 1\nI0622 11:48:56.455779 2041 log.go:172] (0xc0005db4a0) (1) Data frame handling\nI0622 11:48:56.455794 2041 log.go:172] (0xc0005db4a0) (1) Data frame sent\nI0622 11:48:56.455814 2041 log.go:172] (0xc00014c790) (0xc0005db4a0) Stream removed, broadcasting: 1\nI0622 11:48:56.455829 2041 log.go:172] (0xc00014c790) Go away received\nI0622 11:48:56.456045 2041 log.go:172] (0xc00014c790) (0xc0005db4a0) Stream removed, broadcasting: 1\nI0622 11:48:56.456062 2041 log.go:172] 
(0xc00014c790) (0xc000590000) Stream removed, broadcasting: 3\nI0622 11:48:56.456069 2041 log.go:172] (0xc00014c790) (0xc000344000) Stream removed, broadcasting: 5\n" Jun 22 11:48:56.462: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 11:48:56.462: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 11:48:56.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r7n8f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 11:48:56.659: INFO: stderr: "I0622 11:48:56.589934 2064 log.go:172] (0xc000138580) (0xc000691360) Create stream\nI0622 11:48:56.590001 2064 log.go:172] (0xc000138580) (0xc000691360) Stream added, broadcasting: 1\nI0622 11:48:56.592867 2064 log.go:172] (0xc000138580) Reply frame received for 1\nI0622 11:48:56.593034 2064 log.go:172] (0xc000138580) (0xc000730000) Create stream\nI0622 11:48:56.593262 2064 log.go:172] (0xc000138580) (0xc000730000) Stream added, broadcasting: 3\nI0622 11:48:56.594457 2064 log.go:172] (0xc000138580) Reply frame received for 3\nI0622 11:48:56.594518 2064 log.go:172] (0xc000138580) (0xc000691400) Create stream\nI0622 11:48:56.594535 2064 log.go:172] (0xc000138580) (0xc000691400) Stream added, broadcasting: 5\nI0622 11:48:56.595744 2064 log.go:172] (0xc000138580) Reply frame received for 5\nI0622 11:48:56.654054 2064 log.go:172] (0xc000138580) Data frame received for 5\nI0622 11:48:56.654092 2064 log.go:172] (0xc000691400) (5) Data frame handling\nI0622 11:48:56.654131 2064 log.go:172] (0xc000138580) Data frame received for 3\nI0622 11:48:56.654139 2064 log.go:172] (0xc000730000) (3) Data frame handling\nI0622 11:48:56.654145 2064 log.go:172] (0xc000730000) (3) Data frame sent\nI0622 11:48:56.654150 2064 log.go:172] (0xc000138580) Data frame received for 3\nI0622 11:48:56.654154 2064 log.go:172] (0xc000730000) (3) Data frame handling\nI0622 11:48:56.655291 2064 log.go:172] (0xc000138580) Data frame received for 1\nI0622 11:48:56.655307 2064 log.go:172] (0xc000691360) (1) Data frame handling\nI0622 11:48:56.655315 2064 log.go:172] (0xc000691360) (1) Data frame sent\nI0622 11:48:56.655325 2064 log.go:172] (0xc000138580) (0xc000691360) Stream removed, broadcasting: 1\nI0622 11:48:56.655336 2064 log.go:172] (0xc000138580) Go away received\nI0622 11:48:56.655592 2064 log.go:172] (0xc000138580) (0xc000691360) Stream removed, broadcasting: 1\nI0622 11:48:56.655603 2064 log.go:172] (0xc000138580) (0xc000730000) Stream removed, broadcasting: 3\nI0622 11:48:56.655608 2064 log.go:172] (0xc000138580) (0xc000691400) Stream removed, broadcasting: 5\n" Jun 22 11:48:56.659: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 11:48:56.659: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 11:48:56.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r7n8f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 11:48:56.852: INFO: stderr: "I0622 11:48:56.781990 2086 log.go:172] (0xc000138630) (0xc00073a640) Create stream\nI0622 11:48:56.782042 2086 log.go:172] (0xc000138630) (0xc00073a640) Stream added, broadcasting: 1\nI0622 11:48:56.784546 2086 log.go:172] (0xc000138630) Reply frame received for 1\nI0622 11:48:56.784609 2086 log.go:172] (0xc000138630) 
(0xc0006f8d20) Create stream\nI0622 11:48:56.784630 2086 log.go:172] (0xc000138630) (0xc0006f8d20) Stream added, broadcasting: 3\nI0622 11:48:56.785915 2086 log.go:172] (0xc000138630) Reply frame received for 3\nI0622 11:48:56.785953 2086 log.go:172] (0xc000138630) (0xc000424000) Create stream\nI0622 11:48:56.785969 2086 log.go:172] (0xc000138630) (0xc000424000) Stream added, broadcasting: 5\nI0622 11:48:56.786959 2086 log.go:172] (0xc000138630) Reply frame received for 5\nI0622 11:48:56.843648 2086 log.go:172] (0xc000138630) Data frame received for 5\nI0622 11:48:56.843671 2086 log.go:172] (0xc000424000) (5) Data frame handling\nI0622 11:48:56.843716 2086 log.go:172] (0xc000138630) Data frame received for 3\nI0622 11:48:56.843762 2086 log.go:172] (0xc0006f8d20) (3) Data frame handling\nI0622 11:48:56.843787 2086 log.go:172] (0xc0006f8d20) (3) Data frame sent\nI0622 11:48:56.843804 2086 log.go:172] (0xc000138630) Data frame received for 3\nI0622 11:48:56.843820 2086 log.go:172] (0xc0006f8d20) (3) Data frame handling\nI0622 11:48:56.845928 2086 log.go:172] (0xc000138630) Data frame received for 1\nI0622 11:48:56.845951 2086 log.go:172] (0xc00073a640) (1) Data frame handling\nI0622 11:48:56.845963 2086 log.go:172] (0xc00073a640) (1) Data frame sent\nI0622 11:48:56.845976 2086 log.go:172] (0xc000138630) (0xc00073a640) Stream removed, broadcasting: 1\nI0622 11:48:56.845996 2086 log.go:172] (0xc000138630) Go away received\nI0622 11:48:56.846242 2086 log.go:172] (0xc000138630) (0xc00073a640) Stream removed, broadcasting: 1\nI0622 11:48:56.846272 2086 log.go:172] (0xc000138630) (0xc0006f8d20) Stream removed, broadcasting: 3\nI0622 11:48:56.846284 2086 log.go:172] (0xc000138630) (0xc000424000) Stream removed, broadcasting: 5\n" Jun 22 11:48:56.852: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 11:48:56.852: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 11:48:56.852: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 22 11:49:36.891: INFO: Deleting all statefulset in ns e2e-tests-statefulset-r7n8f Jun 22 11:49:36.895: INFO: Scaling statefulset ss to 0 Jun 22 11:49:36.905: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 11:49:36.908: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:49:36.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-r7n8f" for this suite. 
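The long exec transcripts above are the readiness toggle that drives the whole scenario: each `mv` of index.html out of the nginx web root makes that pod's HTTP readiness probe fail, the pod drops to Ready=false, and the OrderedReady StatefulSet controller refuses to move past it (hence the repeated "doesn't scale past N" checks); moving the file back restores readiness, and scaling then proceeds up in ordinal order and down in reverse. A sketch of a StatefulSet shaped this way is below; names, image and probe details are illustrative, and the probe wrapper field shown (ProbeHandler) is the current k8s.io/api spelling rather than the v1.13-era Handler.

// statefulset_ordered.go - a sketch of an OrderedReady StatefulSet whose
// pods gate on an HTTP readiness probe for /index.html, the mechanism the
// test manipulates with the mv commands above. Details are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"foo": "bar", "baz": "blah"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless service, created separately
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// OrderedReady (the default) creates pods 0,1,2,... one at a
			// time, deletes them in reverse, and waits on readiness between
			// steps - which is why an unready pod halts scaling.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx",
						ReadinessProbe: &corev1.Probe{
							ProbeHandler: corev1.ProbeHandler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}

The alternative PodManagementPolicy, Parallel, would skip the ordering and readiness gating exercised here.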
Jun 22 11:49:42.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:49:43.051: INFO: namespace: e2e-tests-statefulset-r7n8f, resource: bindings, ignored listing per whitelist Jun 22 11:49:43.055: INFO: namespace e2e-tests-statefulset-r7n8f deletion completed in 6.106806476s • [SLOW TEST:118.849 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:49:43.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:49:47.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-l6b4p" for this suite. 
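The Kubelet case above schedules a busybox container whose command always fails, then asserts that the container status eventually reports a Terminated state carrying a reason (and a non-zero exit code) rather than sitting in Waiting. Below is a sketch of such a pod plus the status accessor a check like that would read; the image, command, restart policy and helper function are all illustrative, not the framework's own code.

// kubelet_terminated_reason.go - a sketch of a pod whose container always
// fails, and of reading the Terminated reason/exit code from its status.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatedReason pulls the terminated reason and exit code out of the
// first container status, the fields a conformance-style check inspects.
func terminatedReason(pod *corev1.Pod) (string, int32, bool) {
	if len(pod.Status.ContainerStatuses) == 0 {
		return "", 0, false
	}
	term := pod.Status.ContainerStatuses[0].State.Terminated
	if term == nil {
		return "", 0, false
	}
	return term.Reason, term.ExitCode, true
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits 1
			}},
		},
	}
	// In a real run the pod would be created and its status polled; this
	// sketch only shows the spec and the accessor used on the live object.
	fmt.Printf("pod %q, container command %v\n", pod.Name, pod.Spec.Containers[0].Command)
	if reason, code, ok := terminatedReason(pod); ok {
		fmt.Printf("terminated: reason=%s exitCode=%d\n", reason, code)
	} else {
		fmt.Println("no terminated state yet (pod not run in this sketch)")
	}
}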
Jun 22 11:49:53.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:49:53.286: INFO: namespace: e2e-tests-kubelet-test-l6b4p, resource: bindings, ignored listing per whitelist Jun 22 11:49:53.337: INFO: namespace e2e-tests-kubelet-test-l6b4p deletion completed in 6.140640218s • [SLOW TEST:10.281 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:49:53.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-7c687b63-b47e-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 11:49:53.454: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7c6ab7bd-b47e-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-vq8jb" to be "success or failure" Jun 22 11:49:53.498: INFO: Pod "pod-projected-secrets-7c6ab7bd-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 43.402567ms Jun 22 11:49:55.502: INFO: Pod "pod-projected-secrets-7c6ab7bd-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047547373s Jun 22 11:49:57.505: INFO: Pod "pod-projected-secrets-7c6ab7bd-b47e-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051335321s STEP: Saw pod success Jun 22 11:49:57.506: INFO: Pod "pod-projected-secrets-7c6ab7bd-b47e-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:49:57.508: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-7c6ab7bd-b47e-11ea-8cd8-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 22 11:49:57.539: INFO: Waiting for pod pod-projected-secrets-7c6ab7bd-b47e-11ea-8cd8-0242ac11001b to disappear Jun 22 11:49:57.566: INFO: Pod pod-projected-secrets-7c6ab7bd-b47e-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:49:57.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vq8jb" for this suite. 
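The Projected secret case combines three knobs: a projected volume whose only source is a secret, an explicit defaultMode on the projection, and a pod-level security context with a non-root runAsUser and an fsGroup so the mounted files end up group-readable by the running user. A sketch of that shape follows, with illustrative mode bits, UID/GID and names rather than the generated ones in the log.

// projected_secret_pod.go - a sketch of a non-root pod mounting a Secret
// through a projected volume with defaultMode and fsGroup set.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	fsGroup := int64(1001)
	defaultMode := int32(0440)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup, // mounted files get this group ownership
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-example",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}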
Jun 22 11:50:03.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:50:03.642: INFO: namespace: e2e-tests-projected-vq8jb, resource: bindings, ignored listing per whitelist Jun 22 11:50:03.647: INFO: namespace e2e-tests-projected-vq8jb deletion completed in 6.07920266s • [SLOW TEST:10.310 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:50:03.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-8290f190-b47e-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:50:03.776: INFO: Waiting up to 5m0s for pod "pod-configmaps-8291bb16-b47e-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-configmap-8cx5j" to be "success or failure" Jun 22 11:50:03.794: INFO: Pod "pod-configmaps-8291bb16-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.228328ms Jun 22 11:50:05.799: INFO: Pod "pod-configmaps-8291bb16-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023082134s Jun 22 11:50:07.802: INFO: Pod "pod-configmaps-8291bb16-b47e-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026334434s STEP: Saw pod success Jun 22 11:50:07.803: INFO: Pod "pod-configmaps-8291bb16-b47e-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:50:07.805: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-8291bb16-b47e-11ea-8cd8-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 22 11:50:07.861: INFO: Waiting for pod pod-configmaps-8291bb16-b47e-11ea-8cd8-0242ac11001b to disappear Jun 22 11:50:07.883: INFO: Pod pod-configmaps-8291bb16-b47e-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:50:07.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8cx5j" for this suite. 
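The ConfigMap case maps a single key to a custom path inside the volume and gives that item its own file mode via items[].mode, which overrides any volume-wide defaultMode for that file. A sketch with an illustrative key, path and mode:

// configmap_items_pod.go - a sketch of a pod mounting a ConfigMap volume
// with a key-to-path mapping and a per-item file mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-map-example",
						},
						// Remap one key to a new file name and give that
						// file its own mode, independent of defaultMode.
						Items: []corev1.KeyToPath{{
							Key:  "data-2",
							Path: "path/to/data-2",
							Mode: &itemMode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}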
Jun 22 11:50:13.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:50:13.972: INFO: namespace: e2e-tests-configmap-8cx5j, resource: bindings, ignored listing per whitelist Jun 22 11:50:14.049: INFO: namespace e2e-tests-configmap-8cx5j deletion completed in 6.160903142s • [SLOW TEST:10.401 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:50:14.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-nlslr I0622 11:50:14.187125 7 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-nlslr, replica count: 1 I0622 11:50:15.237578 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 11:50:16.237807 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 11:50:17.238025 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 11:50:18.238249 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 22 11:50:18.390: INFO: Created: latency-svc-t7h4s Jun 22 11:50:18.407: INFO: Got endpoints: latency-svc-t7h4s [69.050381ms] Jun 22 11:50:18.430: INFO: Created: latency-svc-r9qt8 Jun 22 11:50:18.484: INFO: Got endpoints: latency-svc-r9qt8 [76.467135ms] Jun 22 11:50:18.527: INFO: Created: latency-svc-tjl7g Jun 22 11:50:18.538: INFO: Got endpoints: latency-svc-tjl7g [130.383549ms] Jun 22 11:50:18.558: INFO: Created: latency-svc-ldpkr Jun 22 11:50:18.572: INFO: Got endpoints: latency-svc-ldpkr [164.62189ms] Jun 22 11:50:18.595: INFO: Created: latency-svc-sbtpv Jun 22 11:50:18.671: INFO: Got endpoints: latency-svc-sbtpv [263.5783ms] Jun 22 11:50:18.700: INFO: Created: latency-svc-kct6g Jun 22 11:50:18.724: INFO: Got endpoints: latency-svc-kct6g [316.373301ms] Jun 22 11:50:18.746: INFO: Created: latency-svc-rmk5k Jun 22 11:50:18.760: INFO: Got endpoints: latency-svc-rmk5k [352.449844ms] Jun 22 11:50:18.815: INFO: Created: latency-svc-l9psf Jun 22 11:50:18.818: INFO: Got endpoints: latency-svc-l9psf [410.444392ms] Jun 22 11:50:18.874: INFO: Created: latency-svc-bcfpk Jun 22 11:50:18.886: INFO: Got endpoints: latency-svc-bcfpk 
[478.776936ms] Jun 22 11:50:18.904: INFO: Created: latency-svc-48gjm Jun 22 11:50:18.946: INFO: Got endpoints: latency-svc-48gjm [539.03555ms] Jun 22 11:50:18.954: INFO: Created: latency-svc-87fqt Jun 22 11:50:18.965: INFO: Got endpoints: latency-svc-87fqt [558.06828ms] Jun 22 11:50:19.008: INFO: Created: latency-svc-nj4h5 Jun 22 11:50:19.024: INFO: Got endpoints: latency-svc-nj4h5 [616.505928ms] Jun 22 11:50:19.078: INFO: Created: latency-svc-ffkm5 Jun 22 11:50:19.081: INFO: Got endpoints: latency-svc-ffkm5 [673.8621ms] Jun 22 11:50:19.108: INFO: Created: latency-svc-c7dhr Jun 22 11:50:19.127: INFO: Got endpoints: latency-svc-c7dhr [719.779427ms] Jun 22 11:50:19.170: INFO: Created: latency-svc-crczc Jun 22 11:50:19.222: INFO: Got endpoints: latency-svc-crczc [814.451653ms] Jun 22 11:50:19.224: INFO: Created: latency-svc-qrlbl Jun 22 11:50:19.253: INFO: Got endpoints: latency-svc-qrlbl [846.090529ms] Jun 22 11:50:19.282: INFO: Created: latency-svc-928wl Jun 22 11:50:19.371: INFO: Got endpoints: latency-svc-928wl [887.570412ms] Jun 22 11:50:19.379: INFO: Created: latency-svc-w9kvj Jun 22 11:50:19.391: INFO: Got endpoints: latency-svc-w9kvj [853.883586ms] Jun 22 11:50:19.416: INFO: Created: latency-svc-mb5tx Jun 22 11:50:19.428: INFO: Got endpoints: latency-svc-mb5tx [855.545591ms] Jun 22 11:50:19.463: INFO: Created: latency-svc-pzfsz Jun 22 11:50:19.509: INFO: Got endpoints: latency-svc-pzfsz [838.337483ms] Jun 22 11:50:19.522: INFO: Created: latency-svc-fpvpm Jun 22 11:50:19.524: INFO: Got endpoints: latency-svc-fpvpm [800.496337ms] Jun 22 11:50:19.554: INFO: Created: latency-svc-9h5zn Jun 22 11:50:19.560: INFO: Got endpoints: latency-svc-9h5zn [800.736096ms] Jun 22 11:50:19.584: INFO: Created: latency-svc-pkn8d Jun 22 11:50:19.601: INFO: Got endpoints: latency-svc-pkn8d [783.704306ms] Jun 22 11:50:19.665: INFO: Created: latency-svc-wjv9s Jun 22 11:50:19.669: INFO: Got endpoints: latency-svc-wjv9s [782.97496ms] Jun 22 11:50:19.739: INFO: Created: latency-svc-nb77m Jun 22 11:50:19.747: INFO: Got endpoints: latency-svc-nb77m [801.034192ms] Jun 22 11:50:19.834: INFO: Created: latency-svc-47mnf Jun 22 11:50:19.843: INFO: Got endpoints: latency-svc-47mnf [878.062938ms] Jun 22 11:50:19.864: INFO: Created: latency-svc-zb2cs Jun 22 11:50:19.880: INFO: Got endpoints: latency-svc-zb2cs [855.836602ms] Jun 22 11:50:19.900: INFO: Created: latency-svc-wlrhw Jun 22 11:50:19.910: INFO: Got endpoints: latency-svc-wlrhw [829.130104ms] Jun 22 11:50:19.930: INFO: Created: latency-svc-x4q69 Jun 22 11:50:19.988: INFO: Got endpoints: latency-svc-x4q69 [860.94345ms] Jun 22 11:50:20.010: INFO: Created: latency-svc-gq54q Jun 22 11:50:20.025: INFO: Got endpoints: latency-svc-gq54q [802.922515ms] Jun 22 11:50:20.069: INFO: Created: latency-svc-5q654 Jun 22 11:50:20.079: INFO: Got endpoints: latency-svc-5q654 [825.498629ms] Jun 22 11:50:20.139: INFO: Created: latency-svc-tdm6z Jun 22 11:50:20.142: INFO: Got endpoints: latency-svc-tdm6z [770.248532ms] Jun 22 11:50:20.220: INFO: Created: latency-svc-s6j9k Jun 22 11:50:20.236: INFO: Got endpoints: latency-svc-s6j9k [844.278176ms] Jun 22 11:50:20.288: INFO: Created: latency-svc-bnz8f Jun 22 11:50:20.290: INFO: Got endpoints: latency-svc-bnz8f [862.494892ms] Jun 22 11:50:20.351: INFO: Created: latency-svc-jnf8w Jun 22 11:50:20.368: INFO: Got endpoints: latency-svc-jnf8w [858.814285ms] Jun 22 11:50:20.431: INFO: Created: latency-svc-kjjcf Jun 22 11:50:20.452: INFO: Got endpoints: latency-svc-kjjcf [927.769684ms] Jun 22 11:50:20.506: INFO: Created: latency-svc-qtp8b Jun 22 
11:50:20.530: INFO: Got endpoints: latency-svc-qtp8b [969.596362ms] Jun 22 11:50:20.611: INFO: Created: latency-svc-xngbt Jun 22 11:50:20.614: INFO: Got endpoints: latency-svc-xngbt [1.012982402s] Jun 22 11:50:20.667: INFO: Created: latency-svc-mtznl Jun 22 11:50:20.675: INFO: Got endpoints: latency-svc-mtznl [1.005858601s] Jun 22 11:50:20.692: INFO: Created: latency-svc-6wtqn Jun 22 11:50:20.755: INFO: Got endpoints: latency-svc-6wtqn [1.007071074s] Jun 22 11:50:20.770: INFO: Created: latency-svc-rl6tb Jun 22 11:50:20.784: INFO: Got endpoints: latency-svc-rl6tb [940.005971ms] Jun 22 11:50:20.808: INFO: Created: latency-svc-bbgz2 Jun 22 11:50:20.826: INFO: Got endpoints: latency-svc-bbgz2 [946.242276ms] Jun 22 11:50:20.844: INFO: Created: latency-svc-xxj4r Jun 22 11:50:20.902: INFO: Got endpoints: latency-svc-xxj4r [991.700719ms] Jun 22 11:50:20.932: INFO: Created: latency-svc-m2zx5 Jun 22 11:50:20.946: INFO: Got endpoints: latency-svc-m2zx5 [958.263926ms] Jun 22 11:50:20.968: INFO: Created: latency-svc-qncf5 Jun 22 11:50:20.977: INFO: Got endpoints: latency-svc-qncf5 [952.14205ms] Jun 22 11:50:21.031: INFO: Created: latency-svc-jdvhv Jun 22 11:50:21.034: INFO: Got endpoints: latency-svc-jdvhv [955.090751ms] Jun 22 11:50:21.059: INFO: Created: latency-svc-pcxgc Jun 22 11:50:21.074: INFO: Got endpoints: latency-svc-pcxgc [932.071329ms] Jun 22 11:50:21.118: INFO: Created: latency-svc-kn6ck Jun 22 11:50:21.198: INFO: Got endpoints: latency-svc-kn6ck [962.103717ms] Jun 22 11:50:21.200: INFO: Created: latency-svc-56794 Jun 22 11:50:21.233: INFO: Got endpoints: latency-svc-56794 [943.030444ms] Jun 22 11:50:21.270: INFO: Created: latency-svc-g9j8n Jun 22 11:50:21.284: INFO: Got endpoints: latency-svc-g9j8n [915.971518ms] Jun 22 11:50:21.336: INFO: Created: latency-svc-sg7hv Jun 22 11:50:21.338: INFO: Got endpoints: latency-svc-sg7hv [886.512616ms] Jun 22 11:50:21.388: INFO: Created: latency-svc-zdqlt Jun 22 11:50:21.399: INFO: Got endpoints: latency-svc-zdqlt [868.730677ms] Jun 22 11:50:21.420: INFO: Created: latency-svc-5cpts Jun 22 11:50:21.435: INFO: Got endpoints: latency-svc-5cpts [820.739689ms] Jun 22 11:50:21.498: INFO: Created: latency-svc-87xn6 Jun 22 11:50:21.502: INFO: Got endpoints: latency-svc-87xn6 [826.774946ms] Jun 22 11:50:21.538: INFO: Created: latency-svc-jj8nb Jun 22 11:50:21.562: INFO: Got endpoints: latency-svc-jj8nb [807.403436ms] Jun 22 11:50:21.594: INFO: Created: latency-svc-xvdm5 Jun 22 11:50:21.635: INFO: Got endpoints: latency-svc-xvdm5 [851.106379ms] Jun 22 11:50:21.647: INFO: Created: latency-svc-7xmxr Jun 22 11:50:21.664: INFO: Got endpoints: latency-svc-7xmxr [838.310085ms] Jun 22 11:50:21.690: INFO: Created: latency-svc-2sfx4 Jun 22 11:50:21.706: INFO: Got endpoints: latency-svc-2sfx4 [804.531421ms] Jun 22 11:50:21.791: INFO: Created: latency-svc-vq6x9 Jun 22 11:50:21.793: INFO: Got endpoints: latency-svc-vq6x9 [846.918139ms] Jun 22 11:50:22.289: INFO: Created: latency-svc-j4bnq Jun 22 11:50:22.291: INFO: Got endpoints: latency-svc-j4bnq [1.314350155s] Jun 22 11:50:22.324: INFO: Created: latency-svc-jfdz8 Jun 22 11:50:22.336: INFO: Got endpoints: latency-svc-jfdz8 [1.302039563s] Jun 22 11:50:22.360: INFO: Created: latency-svc-7r9vc Jun 22 11:50:22.373: INFO: Got endpoints: latency-svc-7r9vc [1.299097587s] Jun 22 11:50:22.444: INFO: Created: latency-svc-5r2qh Jun 22 11:50:22.467: INFO: Got endpoints: latency-svc-5r2qh [1.268989817s] Jun 22 11:50:22.516: INFO: Created: latency-svc-gxmxj Jun 22 11:50:22.524: INFO: Got endpoints: latency-svc-gxmxj [1.290314172s] Jun 
22 11:50:22.588: INFO: Created: latency-svc-v29gc Jun 22 11:50:22.590: INFO: Got endpoints: latency-svc-v29gc [1.305839298s] Jun 22 11:50:22.618: INFO: Created: latency-svc-jkk76 Jun 22 11:50:22.632: INFO: Got endpoints: latency-svc-jkk76 [1.293515545s] Jun 22 11:50:22.650: INFO: Created: latency-svc-g2bbl Jun 22 11:50:22.677: INFO: Got endpoints: latency-svc-g2bbl [1.278282391s] Jun 22 11:50:22.749: INFO: Created: latency-svc-4kcrk Jun 22 11:50:22.758: INFO: Got endpoints: latency-svc-4kcrk [1.323057281s] Jun 22 11:50:22.781: INFO: Created: latency-svc-4fcgl Jun 22 11:50:22.795: INFO: Got endpoints: latency-svc-4fcgl [1.293374311s] Jun 22 11:50:22.817: INFO: Created: latency-svc-zk7q7 Jun 22 11:50:22.831: INFO: Got endpoints: latency-svc-zk7q7 [1.269287682s] Jun 22 11:50:22.898: INFO: Created: latency-svc-n2hw8 Jun 22 11:50:22.901: INFO: Got endpoints: latency-svc-n2hw8 [1.266191151s] Jun 22 11:50:22.935: INFO: Created: latency-svc-749xt Jun 22 11:50:22.952: INFO: Got endpoints: latency-svc-749xt [120.537727ms] Jun 22 11:50:22.978: INFO: Created: latency-svc-pblx4 Jun 22 11:50:22.995: INFO: Got endpoints: latency-svc-pblx4 [1.330513284s] Jun 22 11:50:23.054: INFO: Created: latency-svc-tlsrc Jun 22 11:50:23.057: INFO: Got endpoints: latency-svc-tlsrc [1.350645676s] Jun 22 11:50:23.133: INFO: Created: latency-svc-c5msr Jun 22 11:50:23.152: INFO: Got endpoints: latency-svc-c5msr [1.358097976s] Jun 22 11:50:23.212: INFO: Created: latency-svc-8s8qh Jun 22 11:50:23.229: INFO: Got endpoints: latency-svc-8s8qh [937.352013ms] Jun 22 11:50:23.250: INFO: Created: latency-svc-mcldt Jun 22 11:50:23.271: INFO: Got endpoints: latency-svc-mcldt [935.036096ms] Jun 22 11:50:23.307: INFO: Created: latency-svc-f7w9h Jun 22 11:50:23.348: INFO: Got endpoints: latency-svc-f7w9h [974.801545ms] Jun 22 11:50:23.377: INFO: Created: latency-svc-s4gzm Jun 22 11:50:23.404: INFO: Got endpoints: latency-svc-s4gzm [937.419382ms] Jun 22 11:50:23.447: INFO: Created: latency-svc-qvlnx Jun 22 11:50:23.503: INFO: Got endpoints: latency-svc-qvlnx [979.520432ms] Jun 22 11:50:23.524: INFO: Created: latency-svc-bb2np Jun 22 11:50:23.572: INFO: Got endpoints: latency-svc-bb2np [982.20901ms] Jun 22 11:50:23.603: INFO: Created: latency-svc-f48jr Jun 22 11:50:23.645: INFO: Got endpoints: latency-svc-f48jr [1.01267926s] Jun 22 11:50:23.668: INFO: Created: latency-svc-jwvnh Jun 22 11:50:23.715: INFO: Got endpoints: latency-svc-jwvnh [1.037899572s] Jun 22 11:50:23.785: INFO: Created: latency-svc-6rpzl Jun 22 11:50:23.812: INFO: Got endpoints: latency-svc-6rpzl [1.054038689s] Jun 22 11:50:24.850: INFO: Created: latency-svc-8b4kp Jun 22 11:50:24.864: INFO: Got endpoints: latency-svc-8b4kp [2.068714975s] Jun 22 11:50:25.917: INFO: Created: latency-svc-b4n9q Jun 22 11:50:25.917: INFO: Got endpoints: latency-svc-b4n9q [3.016099401s] Jun 22 11:50:25.948: INFO: Created: latency-svc-lcllc Jun 22 11:50:25.964: INFO: Got endpoints: latency-svc-lcllc [3.011952312s] Jun 22 11:50:25.998: INFO: Created: latency-svc-sfr5q Jun 22 11:50:26.066: INFO: Got endpoints: latency-svc-sfr5q [3.071473547s] Jun 22 11:50:26.076: INFO: Created: latency-svc-hrtsx Jun 22 11:50:26.090: INFO: Got endpoints: latency-svc-hrtsx [3.033029417s] Jun 22 11:50:26.126: INFO: Created: latency-svc-wfvjl Jun 22 11:50:26.138: INFO: Got endpoints: latency-svc-wfvjl [2.986865877s] Jun 22 11:50:26.162: INFO: Created: latency-svc-4s842 Jun 22 11:50:26.216: INFO: Got endpoints: latency-svc-4s842 [2.987033308s] Jun 22 11:50:26.232: INFO: Created: latency-svc-wplxv Jun 22 11:50:26.247: 
INFO: Got endpoints: latency-svc-wplxv [2.975697647s] Jun 22 11:50:26.268: INFO: Created: latency-svc-sgg5d Jun 22 11:50:26.284: INFO: Got endpoints: latency-svc-sgg5d [2.93592497s] Jun 22 11:50:26.305: INFO: Created: latency-svc-zlv7b Jun 22 11:50:26.347: INFO: Got endpoints: latency-svc-zlv7b [2.942887788s] Jun 22 11:50:26.360: INFO: Created: latency-svc-5fh4b Jun 22 11:50:26.389: INFO: Got endpoints: latency-svc-5fh4b [2.886305709s] Jun 22 11:50:26.424: INFO: Created: latency-svc-qs6sf Jun 22 11:50:26.442: INFO: Got endpoints: latency-svc-qs6sf [2.869722231s] Jun 22 11:50:26.516: INFO: Created: latency-svc-mpt2b Jun 22 11:50:26.534: INFO: Got endpoints: latency-svc-mpt2b [2.888683285s] Jun 22 11:50:26.568: INFO: Created: latency-svc-jwt8k Jun 22 11:50:26.585: INFO: Got endpoints: latency-svc-jwt8k [2.870159811s] Jun 22 11:50:26.610: INFO: Created: latency-svc-ts4b6 Jun 22 11:50:26.659: INFO: Got endpoints: latency-svc-ts4b6 [2.846237759s] Jun 22 11:50:26.666: INFO: Created: latency-svc-z94tk Jun 22 11:50:26.682: INFO: Got endpoints: latency-svc-z94tk [1.817552632s] Jun 22 11:50:26.738: INFO: Created: latency-svc-qjqkm Jun 22 11:50:26.791: INFO: Got endpoints: latency-svc-qjqkm [873.773234ms] Jun 22 11:50:26.802: INFO: Created: latency-svc-tdhl5 Jun 22 11:50:26.833: INFO: Got endpoints: latency-svc-tdhl5 [868.743042ms] Jun 22 11:50:26.863: INFO: Created: latency-svc-9mdr8 Jun 22 11:50:26.874: INFO: Got endpoints: latency-svc-9mdr8 [807.680589ms] Jun 22 11:50:26.923: INFO: Created: latency-svc-dj8rz Jun 22 11:50:26.941: INFO: Got endpoints: latency-svc-dj8rz [850.77556ms] Jun 22 11:50:26.966: INFO: Created: latency-svc-gwh7q Jun 22 11:50:26.977: INFO: Got endpoints: latency-svc-gwh7q [838.986319ms] Jun 22 11:50:26.994: INFO: Created: latency-svc-x2mlj Jun 22 11:50:27.007: INFO: Got endpoints: latency-svc-x2mlj [791.39692ms] Jun 22 11:50:27.061: INFO: Created: latency-svc-sffb9 Jun 22 11:50:27.064: INFO: Got endpoints: latency-svc-sffb9 [816.621522ms] Jun 22 11:50:27.098: INFO: Created: latency-svc-kwvgd Jun 22 11:50:27.110: INFO: Got endpoints: latency-svc-kwvgd [826.445697ms] Jun 22 11:50:27.134: INFO: Created: latency-svc-z47fd Jun 22 11:50:27.152: INFO: Got endpoints: latency-svc-z47fd [804.812114ms] Jun 22 11:50:27.212: INFO: Created: latency-svc-zl9ts Jun 22 11:50:27.216: INFO: Got endpoints: latency-svc-zl9ts [826.551922ms] Jun 22 11:50:27.260: INFO: Created: latency-svc-4hsx7 Jun 22 11:50:27.277: INFO: Got endpoints: latency-svc-4hsx7 [835.075689ms] Jun 22 11:50:27.308: INFO: Created: latency-svc-krwbq Jun 22 11:50:27.384: INFO: Got endpoints: latency-svc-krwbq [849.934161ms] Jun 22 11:50:27.408: INFO: Created: latency-svc-7vwvc Jun 22 11:50:27.423: INFO: Got endpoints: latency-svc-7vwvc [837.857254ms] Jun 22 11:50:27.444: INFO: Created: latency-svc-cp74r Jun 22 11:50:27.454: INFO: Got endpoints: latency-svc-cp74r [794.909886ms] Jun 22 11:50:27.481: INFO: Created: latency-svc-42zbw Jun 22 11:50:27.557: INFO: Got endpoints: latency-svc-42zbw [875.795666ms] Jun 22 11:50:27.560: INFO: Created: latency-svc-68sbl Jun 22 11:50:27.568: INFO: Got endpoints: latency-svc-68sbl [777.266535ms] Jun 22 11:50:27.601: INFO: Created: latency-svc-h6zdf Jun 22 11:50:27.617: INFO: Got endpoints: latency-svc-h6zdf [784.02881ms] Jun 22 11:50:27.636: INFO: Created: latency-svc-nz8xw Jun 22 11:50:27.647: INFO: Got endpoints: latency-svc-nz8xw [772.558879ms] Jun 22 11:50:27.689: INFO: Created: latency-svc-7lmg7 Jun 22 11:50:27.692: INFO: Got endpoints: latency-svc-7lmg7 [751.033546ms] Jun 22 11:50:27.722: 
INFO: Created: latency-svc-82dwz Jun 22 11:50:27.738: INFO: Got endpoints: latency-svc-82dwz [760.611393ms] Jun 22 11:50:27.763: INFO: Created: latency-svc-lpvmk Jun 22 11:50:27.780: INFO: Got endpoints: latency-svc-lpvmk [772.529715ms] Jun 22 11:50:27.828: INFO: Created: latency-svc-mf5zv Jun 22 11:50:27.864: INFO: Got endpoints: latency-svc-mf5zv [800.405127ms] Jun 22 11:50:27.894: INFO: Created: latency-svc-7dvwg Jun 22 11:50:27.908: INFO: Got endpoints: latency-svc-7dvwg [797.767223ms] Jun 22 11:50:27.956: INFO: Created: latency-svc-66frp Jun 22 11:50:27.966: INFO: Got endpoints: latency-svc-66frp [814.173534ms] Jun 22 11:50:28.002: INFO: Created: latency-svc-dxjmj Jun 22 11:50:28.033: INFO: Got endpoints: latency-svc-dxjmj [816.899789ms] Jun 22 11:50:28.097: INFO: Created: latency-svc-nhsnx Jun 22 11:50:28.100: INFO: Got endpoints: latency-svc-nhsnx [822.888067ms] Jun 22 11:50:28.147: INFO: Created: latency-svc-9nh5b Jun 22 11:50:28.165: INFO: Got endpoints: latency-svc-9nh5b [781.360265ms] Jun 22 11:50:28.188: INFO: Created: latency-svc-mmdc2 Jun 22 11:50:28.258: INFO: Got endpoints: latency-svc-mmdc2 [834.33814ms] Jun 22 11:50:28.285: INFO: Created: latency-svc-gltbx Jun 22 11:50:28.310: INFO: Got endpoints: latency-svc-gltbx [855.882418ms] Jun 22 11:50:28.339: INFO: Created: latency-svc-mtvv8 Jun 22 11:50:28.352: INFO: Got endpoints: latency-svc-mtvv8 [794.906936ms] Jun 22 11:50:28.413: INFO: Created: latency-svc-22zm7 Jun 22 11:50:28.452: INFO: Created: latency-svc-wvxs8 Jun 22 11:50:28.483: INFO: Got endpoints: latency-svc-22zm7 [914.832732ms] Jun 22 11:50:28.484: INFO: Created: latency-svc-frgc8 Jun 22 11:50:28.498: INFO: Got endpoints: latency-svc-frgc8 [851.168366ms] Jun 22 11:50:28.557: INFO: Got endpoints: latency-svc-wvxs8 [940.392687ms] Jun 22 11:50:28.557: INFO: Created: latency-svc-lw6jx Jun 22 11:50:28.563: INFO: Got endpoints: latency-svc-lw6jx [870.914282ms] Jun 22 11:50:28.609: INFO: Created: latency-svc-xw5gp Jun 22 11:50:28.636: INFO: Got endpoints: latency-svc-xw5gp [897.630062ms] Jun 22 11:50:28.714: INFO: Created: latency-svc-k88mm Jun 22 11:50:28.717: INFO: Got endpoints: latency-svc-k88mm [937.33208ms] Jun 22 11:50:28.772: INFO: Created: latency-svc-8ckfq Jun 22 11:50:28.806: INFO: Got endpoints: latency-svc-8ckfq [942.07417ms] Jun 22 11:50:28.857: INFO: Created: latency-svc-pkmm9 Jun 22 11:50:28.873: INFO: Got endpoints: latency-svc-pkmm9 [965.164698ms] Jun 22 11:50:28.916: INFO: Created: latency-svc-2j72h Jun 22 11:50:28.937: INFO: Got endpoints: latency-svc-2j72h [970.299041ms] Jun 22 11:50:29.006: INFO: Created: latency-svc-r9c6r Jun 22 11:50:29.010: INFO: Got endpoints: latency-svc-r9c6r [976.374533ms] Jun 22 11:50:29.040: INFO: Created: latency-svc-lnncc Jun 22 11:50:29.051: INFO: Got endpoints: latency-svc-lnncc [950.980305ms] Jun 22 11:50:29.082: INFO: Created: latency-svc-tcxsx Jun 22 11:50:29.100: INFO: Got endpoints: latency-svc-tcxsx [935.012846ms] Jun 22 11:50:29.151: INFO: Created: latency-svc-dr57l Jun 22 11:50:29.154: INFO: Got endpoints: latency-svc-dr57l [895.967393ms] Jun 22 11:50:29.186: INFO: Created: latency-svc-l6qgw Jun 22 11:50:29.203: INFO: Got endpoints: latency-svc-l6qgw [892.985491ms] Jun 22 11:50:29.238: INFO: Created: latency-svc-hwwlg Jun 22 11:50:29.282: INFO: Got endpoints: latency-svc-hwwlg [929.458819ms] Jun 22 11:50:29.286: INFO: Created: latency-svc-twhnl Jun 22 11:50:29.299: INFO: Got endpoints: latency-svc-twhnl [815.77614ms] Jun 22 11:50:29.323: INFO: Created: latency-svc-kwbsx Jun 22 11:50:29.335: INFO: Got endpoints: 
latency-svc-kwbsx [836.992041ms] Jun 22 11:50:29.366: INFO: Created: latency-svc-4lmwc Jun 22 11:50:29.415: INFO: Got endpoints: latency-svc-4lmwc [857.818744ms] Jun 22 11:50:29.430: INFO: Created: latency-svc-kw6h7 Jun 22 11:50:29.444: INFO: Got endpoints: latency-svc-kw6h7 [880.709217ms] Jun 22 11:50:29.467: INFO: Created: latency-svc-rqxgv Jun 22 11:50:29.481: INFO: Got endpoints: latency-svc-rqxgv [844.637113ms] Jun 22 11:50:29.576: INFO: Created: latency-svc-cvhmv Jun 22 11:50:29.578: INFO: Got endpoints: latency-svc-cvhmv [860.539719ms] Jun 22 11:50:29.599: INFO: Created: latency-svc-hzcdn Jun 22 11:50:29.613: INFO: Got endpoints: latency-svc-hzcdn [806.441851ms] Jun 22 11:50:29.628: INFO: Created: latency-svc-lgk8l Jun 22 11:50:29.643: INFO: Got endpoints: latency-svc-lgk8l [769.865521ms] Jun 22 11:50:29.664: INFO: Created: latency-svc-7w7xs Jun 22 11:50:29.726: INFO: Got endpoints: latency-svc-7w7xs [788.859825ms] Jun 22 11:50:29.737: INFO: Created: latency-svc-cs2wc Jun 22 11:50:29.767: INFO: Got endpoints: latency-svc-cs2wc [757.506656ms] Jun 22 11:50:29.798: INFO: Created: latency-svc-m7psh Jun 22 11:50:29.881: INFO: Got endpoints: latency-svc-m7psh [829.558522ms] Jun 22 11:50:29.893: INFO: Created: latency-svc-4ltd8 Jun 22 11:50:29.909: INFO: Got endpoints: latency-svc-4ltd8 [808.330278ms] Jun 22 11:50:29.930: INFO: Created: latency-svc-npdqf Jun 22 11:50:29.945: INFO: Got endpoints: latency-svc-npdqf [790.825728ms] Jun 22 11:50:29.966: INFO: Created: latency-svc-p9c26 Jun 22 11:50:30.054: INFO: Got endpoints: latency-svc-p9c26 [851.530561ms] Jun 22 11:50:30.056: INFO: Created: latency-svc-clx4q Jun 22 11:50:30.077: INFO: Got endpoints: latency-svc-clx4q [795.179337ms] Jun 22 11:50:30.115: INFO: Created: latency-svc-rj27w Jun 22 11:50:30.144: INFO: Got endpoints: latency-svc-rj27w [844.51981ms] Jun 22 11:50:30.228: INFO: Created: latency-svc-p2v9t Jun 22 11:50:30.283: INFO: Got endpoints: latency-svc-p2v9t [947.33941ms] Jun 22 11:50:30.283: INFO: Created: latency-svc-9hfbp Jun 22 11:50:30.312: INFO: Got endpoints: latency-svc-9hfbp [896.738737ms] Jun 22 11:50:30.396: INFO: Created: latency-svc-gr5kt Jun 22 11:50:30.399: INFO: Got endpoints: latency-svc-gr5kt [954.669164ms] Jun 22 11:50:30.451: INFO: Created: latency-svc-flkjx Jun 22 11:50:30.480: INFO: Got endpoints: latency-svc-flkjx [999.781731ms] Jun 22 11:50:30.534: INFO: Created: latency-svc-vrx9g Jun 22 11:50:30.536: INFO: Got endpoints: latency-svc-vrx9g [958.324275ms] Jun 22 11:50:30.570: INFO: Created: latency-svc-nwr2z Jun 22 11:50:30.583: INFO: Got endpoints: latency-svc-nwr2z [970.006567ms] Jun 22 11:50:30.601: INFO: Created: latency-svc-xkv5q Jun 22 11:50:30.613: INFO: Got endpoints: latency-svc-xkv5q [969.879355ms] Jun 22 11:50:30.707: INFO: Created: latency-svc-bcmtj Jun 22 11:50:30.720: INFO: Got endpoints: latency-svc-bcmtj [993.978921ms] Jun 22 11:50:30.744: INFO: Created: latency-svc-x7lp5 Jun 22 11:50:30.758: INFO: Got endpoints: latency-svc-x7lp5 [990.50085ms] Jun 22 11:50:30.775: INFO: Created: latency-svc-45z76 Jun 22 11:50:30.788: INFO: Got endpoints: latency-svc-45z76 [907.021613ms] Jun 22 11:50:30.806: INFO: Created: latency-svc-tt9sb Jun 22 11:50:30.857: INFO: Got endpoints: latency-svc-tt9sb [948.30353ms] Jun 22 11:50:30.876: INFO: Created: latency-svc-fg4b4 Jun 22 11:50:30.891: INFO: Got endpoints: latency-svc-fg4b4 [946.148526ms] Jun 22 11:50:30.912: INFO: Created: latency-svc-6v5sk Jun 22 11:50:30.921: INFO: Got endpoints: latency-svc-6v5sk [866.51969ms] Jun 22 11:50:30.944: INFO: Created: 
latency-svc-dbhwg Jun 22 11:50:30.995: INFO: Got endpoints: latency-svc-dbhwg [917.684369ms] Jun 22 11:50:30.997: INFO: Created: latency-svc-bvncw Jun 22 11:50:31.012: INFO: Got endpoints: latency-svc-bvncw [868.388328ms] Jun 22 11:50:31.034: INFO: Created: latency-svc-7wpbb Jun 22 11:50:31.048: INFO: Got endpoints: latency-svc-7wpbb [765.433175ms] Jun 22 11:50:31.068: INFO: Created: latency-svc-5zzth Jun 22 11:50:31.085: INFO: Got endpoints: latency-svc-5zzth [772.615947ms] Jun 22 11:50:31.146: INFO: Created: latency-svc-pzfr9 Jun 22 11:50:31.163: INFO: Got endpoints: latency-svc-pzfr9 [764.03607ms] Jun 22 11:50:31.183: INFO: Created: latency-svc-vbd2j Jun 22 11:50:31.199: INFO: Got endpoints: latency-svc-vbd2j [718.947594ms] Jun 22 11:50:31.225: INFO: Created: latency-svc-x7tnf Jun 22 11:50:31.288: INFO: Got endpoints: latency-svc-x7tnf [751.36959ms] Jun 22 11:50:31.308: INFO: Created: latency-svc-j454n Jun 22 11:50:31.332: INFO: Got endpoints: latency-svc-j454n [749.079256ms] Jun 22 11:50:31.363: INFO: Created: latency-svc-hk9k8 Jun 22 11:50:31.380: INFO: Got endpoints: latency-svc-hk9k8 [766.484968ms] Jun 22 11:50:31.420: INFO: Created: latency-svc-942tx Jun 22 11:50:31.423: INFO: Got endpoints: latency-svc-942tx [702.75377ms] Jun 22 11:50:31.461: INFO: Created: latency-svc-tr4cr Jun 22 11:50:31.488: INFO: Got endpoints: latency-svc-tr4cr [730.777389ms] Jun 22 11:50:31.557: INFO: Created: latency-svc-bdg8p Jun 22 11:50:31.560: INFO: Got endpoints: latency-svc-bdg8p [771.829551ms] Jun 22 11:50:31.591: INFO: Created: latency-svc-grhq2 Jun 22 11:50:31.597: INFO: Got endpoints: latency-svc-grhq2 [739.844348ms] Jun 22 11:50:31.615: INFO: Created: latency-svc-xq5gx Jun 22 11:50:31.627: INFO: Got endpoints: latency-svc-xq5gx [736.414625ms] Jun 22 11:50:31.645: INFO: Created: latency-svc-p4qpm Jun 22 11:50:31.713: INFO: Got endpoints: latency-svc-p4qpm [792.15094ms] Jun 22 11:50:31.715: INFO: Created: latency-svc-tb62r Jun 22 11:50:31.724: INFO: Got endpoints: latency-svc-tb62r [728.958428ms] Jun 22 11:50:31.747: INFO: Created: latency-svc-lzf49 Jun 22 11:50:31.760: INFO: Got endpoints: latency-svc-lzf49 [748.13892ms] Jun 22 11:50:31.777: INFO: Created: latency-svc-zfdgc Jun 22 11:50:31.790: INFO: Got endpoints: latency-svc-zfdgc [742.352198ms] Jun 22 11:50:31.807: INFO: Created: latency-svc-tk5s4 Jun 22 11:50:31.851: INFO: Got endpoints: latency-svc-tk5s4 [765.701425ms] Jun 22 11:50:31.862: INFO: Created: latency-svc-rtgwr Jun 22 11:50:31.881: INFO: Got endpoints: latency-svc-rtgwr [718.694967ms] Jun 22 11:50:31.902: INFO: Created: latency-svc-kk8xd Jun 22 11:50:31.911: INFO: Got endpoints: latency-svc-kk8xd [711.977568ms] Jun 22 11:50:31.932: INFO: Created: latency-svc-4t4t6 Jun 22 11:50:31.942: INFO: Got endpoints: latency-svc-4t4t6 [654.130672ms] Jun 22 11:50:31.989: INFO: Created: latency-svc-bpzfv Jun 22 11:50:31.992: INFO: Got endpoints: latency-svc-bpzfv [659.572023ms] Jun 22 11:50:32.046: INFO: Created: latency-svc-49t4z Jun 22 11:50:32.056: INFO: Got endpoints: latency-svc-49t4z [676.603618ms] Jun 22 11:50:32.186: INFO: Created: latency-svc-2kwkk Jun 22 11:50:32.189: INFO: Got endpoints: latency-svc-2kwkk [766.219492ms] Jun 22 11:50:32.216: INFO: Created: latency-svc-svlkc Jun 22 11:50:32.246: INFO: Got endpoints: latency-svc-svlkc [756.971987ms] Jun 22 11:50:32.274: INFO: Created: latency-svc-jdlsz Jun 22 11:50:32.335: INFO: Got endpoints: latency-svc-jdlsz [775.556617ms] Jun 22 11:50:32.335: INFO: Latencies: [76.467135ms 120.537727ms 130.383549ms 164.62189ms 263.5783ms 316.373301ms 
352.449844ms 410.444392ms 478.776936ms 539.03555ms 558.06828ms 616.505928ms 654.130672ms 659.572023ms 673.8621ms 676.603618ms 702.75377ms 711.977568ms 718.694967ms 718.947594ms 719.779427ms 728.958428ms 730.777389ms 736.414625ms 739.844348ms 742.352198ms 748.13892ms 749.079256ms 751.033546ms 751.36959ms 756.971987ms 757.506656ms 760.611393ms 764.03607ms 765.433175ms 765.701425ms 766.219492ms 766.484968ms 769.865521ms 770.248532ms 771.829551ms 772.529715ms 772.558879ms 772.615947ms 775.556617ms 777.266535ms 781.360265ms 782.97496ms 783.704306ms 784.02881ms 788.859825ms 790.825728ms 791.39692ms 792.15094ms 794.906936ms 794.909886ms 795.179337ms 797.767223ms 800.405127ms 800.496337ms 800.736096ms 801.034192ms 802.922515ms 804.531421ms 804.812114ms 806.441851ms 807.403436ms 807.680589ms 808.330278ms 814.173534ms 814.451653ms 815.77614ms 816.621522ms 816.899789ms 820.739689ms 822.888067ms 825.498629ms 826.445697ms 826.551922ms 826.774946ms 829.130104ms 829.558522ms 834.33814ms 835.075689ms 836.992041ms 837.857254ms 838.310085ms 838.337483ms 838.986319ms 844.278176ms 844.51981ms 844.637113ms 846.090529ms 846.918139ms 849.934161ms 850.77556ms 851.106379ms 851.168366ms 851.530561ms 853.883586ms 855.545591ms 855.836602ms 855.882418ms 857.818744ms 858.814285ms 860.539719ms 860.94345ms 862.494892ms 866.51969ms 868.388328ms 868.730677ms 868.743042ms 870.914282ms 873.773234ms 875.795666ms 878.062938ms 880.709217ms 886.512616ms 887.570412ms 892.985491ms 895.967393ms 896.738737ms 897.630062ms 907.021613ms 914.832732ms 915.971518ms 917.684369ms 927.769684ms 929.458819ms 932.071329ms 935.012846ms 935.036096ms 937.33208ms 937.352013ms 937.419382ms 940.005971ms 940.392687ms 942.07417ms 943.030444ms 946.148526ms 946.242276ms 947.33941ms 948.30353ms 950.980305ms 952.14205ms 954.669164ms 955.090751ms 958.263926ms 958.324275ms 962.103717ms 965.164698ms 969.596362ms 969.879355ms 970.006567ms 970.299041ms 974.801545ms 976.374533ms 979.520432ms 982.20901ms 990.50085ms 991.700719ms 993.978921ms 999.781731ms 1.005858601s 1.007071074s 1.01267926s 1.012982402s 1.037899572s 1.054038689s 1.266191151s 1.268989817s 1.269287682s 1.278282391s 1.290314172s 1.293374311s 1.293515545s 1.299097587s 1.302039563s 1.305839298s 1.314350155s 1.323057281s 1.330513284s 1.350645676s 1.358097976s 1.817552632s 2.068714975s 2.846237759s 2.869722231s 2.870159811s 2.886305709s 2.888683285s 2.93592497s 2.942887788s 2.975697647s 2.986865877s 2.987033308s 3.011952312s 3.016099401s 3.033029417s 3.071473547s] Jun 22 11:50:32.336: INFO: 50 %ile: 855.545591ms Jun 22 11:50:32.336: INFO: 90 %ile: 1.323057281s Jun 22 11:50:32.336: INFO: 99 %ile: 3.033029417s Jun 22 11:50:32.336: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:50:32.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-nlslr" for this suite. 
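The service-endpoint latency test above records one sample per created service (200 in total), sorts them, and reports the 50th, 90th and 99th percentile. A minimal Go sketch of that kind of percentile lookup follows; it uses only the standard library, is not the e2e framework's own helper, and the sample values are a hand-picked subset of the latencies printed above.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at fraction p (0..1) of a sorted slice of
// durations, the same style of lookup the "%ile" lines above suggest.
// Illustrative sketch only, not the framework's implementation.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(p * float64(len(sorted)))
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A few of the samples from the log; the real test records 200.
	samples := []time.Duration{
		969596362 * time.Nanosecond,
		855545591 * time.Nanosecond,
		1323057281 * time.Nanosecond,
		3033029417 * time.Nanosecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%2.0f %%ile: %v\n", p*100, percentile(samples, p))
	}
}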
Jun 22 11:51:00.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:51:00.385: INFO: namespace: e2e-tests-svc-latency-nlslr, resource: bindings, ignored listing per whitelist Jun 22 11:51:00.439: INFO: namespace e2e-tests-svc-latency-nlslr deletion completed in 28.100292918s • [SLOW TEST:46.390 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:51:00.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-trrrd/configmap-test-a4680f36-b47e-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:51:00.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-a469532c-b47e-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-configmap-trrrd" to be "success or failure" Jun 22 11:51:00.558: INFO: Pod "pod-configmaps-a469532c-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.758492ms Jun 22 11:51:02.618: INFO: Pod "pod-configmaps-a469532c-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064370739s Jun 22 11:51:04.623: INFO: Pod "pod-configmaps-a469532c-b47e-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068691889s STEP: Saw pod success Jun 22 11:51:04.623: INFO: Pod "pod-configmaps-a469532c-b47e-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:51:04.626: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-a469532c-b47e-11ea-8cd8-0242ac11001b container env-test: STEP: delete the pod Jun 22 11:51:04.901: INFO: Waiting for pod pod-configmaps-a469532c-b47e-11ea-8cd8-0242ac11001b to disappear Jun 22 11:51:04.905: INFO: Pod pod-configmaps-a469532c-b47e-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:51:04.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-trrrd" for this suite. 
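The ConfigMap test above creates a ConfigMap and a pod whose container imports one of its keys as an environment variable, then checks the container's output. The sketch below shows the shape of those two objects using the k8s.io/api types; the names, key, image and command are placeholders rather than the generated e2e names in the log, and a go.mod with k8s.io/api and k8s.io/apimachinery is assumed.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The ConfigMap holding the test data (placeholder name and key).
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	// A container that pulls that key into an environment variable and
	// prints its environment, which is what the test's log check relies on.
	container := corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{
				Name: "CONFIG_DATA_1",
				ValueFrom: &corev1.EnvVarSource{
					ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						Key:                  "data-1",
					},
				},
			},
		},
	}

	for _, obj := range []interface{}{cm, container} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}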
Jun 22 11:51:10.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:51:10.965: INFO: namespace: e2e-tests-configmap-trrrd, resource: bindings, ignored listing per whitelist Jun 22 11:51:11.001: INFO: namespace e2e-tests-configmap-trrrd deletion completed in 6.093581048s • [SLOW TEST:10.562 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:51:11.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-hfhsb STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 11:51:11.098: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 11:51:31.253: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.126:8080/dial?request=hostName&protocol=udp&host=10.244.2.125&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-hfhsb PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 11:51:31.253: INFO: >>> kubeConfig: /root/.kube/config I0622 11:51:31.281532 7 log.go:172] (0xc000938bb0) (0xc001ec6d20) Create stream I0622 11:51:31.281572 7 log.go:172] (0xc000938bb0) (0xc001ec6d20) Stream added, broadcasting: 1 I0622 11:51:31.283138 7 log.go:172] (0xc000938bb0) Reply frame received for 1 I0622 11:51:31.283191 7 log.go:172] (0xc000938bb0) (0xc001fbbea0) Create stream I0622 11:51:31.283205 7 log.go:172] (0xc000938bb0) (0xc001fbbea0) Stream added, broadcasting: 3 I0622 11:51:31.284126 7 log.go:172] (0xc000938bb0) Reply frame received for 3 I0622 11:51:31.284161 7 log.go:172] (0xc000938bb0) (0xc0022ed4a0) Create stream I0622 11:51:31.284173 7 log.go:172] (0xc000938bb0) (0xc0022ed4a0) Stream added, broadcasting: 5 I0622 11:51:31.285299 7 log.go:172] (0xc000938bb0) Reply frame received for 5 I0622 11:51:31.392125 7 log.go:172] (0xc000938bb0) Data frame received for 3 I0622 11:51:31.392171 7 log.go:172] (0xc001fbbea0) (3) Data frame handling I0622 11:51:31.392200 7 log.go:172] (0xc001fbbea0) (3) Data frame sent I0622 11:51:31.393045 7 log.go:172] (0xc000938bb0) Data frame received for 5 I0622 11:51:31.393063 7 log.go:172] (0xc0022ed4a0) (5) Data frame handling I0622 11:51:31.393082 7 log.go:172] (0xc000938bb0) Data frame received for 3 I0622 11:51:31.393325 7 log.go:172] (0xc001fbbea0) (3) Data frame handling I0622 11:51:31.395240 7 log.go:172] (0xc000938bb0) Data frame received for 1 I0622 11:51:31.395275 7 log.go:172] 
(0xc001ec6d20) (1) Data frame handling I0622 11:51:31.395294 7 log.go:172] (0xc001ec6d20) (1) Data frame sent I0622 11:51:31.395319 7 log.go:172] (0xc000938bb0) (0xc001ec6d20) Stream removed, broadcasting: 1 I0622 11:51:31.395408 7 log.go:172] (0xc000938bb0) Go away received I0622 11:51:31.395433 7 log.go:172] (0xc000938bb0) (0xc001ec6d20) Stream removed, broadcasting: 1 I0622 11:51:31.395451 7 log.go:172] (0xc000938bb0) (0xc001fbbea0) Stream removed, broadcasting: 3 I0622 11:51:31.395464 7 log.go:172] (0xc000938bb0) (0xc0022ed4a0) Stream removed, broadcasting: 5 Jun 22 11:51:31.395: INFO: Waiting for endpoints: map[] Jun 22 11:51:31.399: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.126:8080/dial?request=hostName&protocol=udp&host=10.244.1.95&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-hfhsb PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 11:51:31.399: INFO: >>> kubeConfig: /root/.kube/config I0622 11:51:31.438912 7 log.go:172] (0xc001978370) (0xc000cac280) Create stream I0622 11:51:31.438941 7 log.go:172] (0xc001978370) (0xc000cac280) Stream added, broadcasting: 1 I0622 11:51:33.106178 7 log.go:172] (0xc001978370) Reply frame received for 1 I0622 11:51:33.106270 7 log.go:172] (0xc001978370) (0xc001ddaf00) Create stream I0622 11:51:33.106294 7 log.go:172] (0xc001978370) (0xc001ddaf00) Stream added, broadcasting: 3 I0622 11:51:33.107456 7 log.go:172] (0xc001978370) Reply frame received for 3 I0622 11:51:33.107506 7 log.go:172] (0xc001978370) (0xc000cac320) Create stream I0622 11:51:33.107528 7 log.go:172] (0xc001978370) (0xc000cac320) Stream added, broadcasting: 5 I0622 11:51:33.108393 7 log.go:172] (0xc001978370) Reply frame received for 5 I0622 11:51:33.197388 7 log.go:172] (0xc001978370) Data frame received for 3 I0622 11:51:33.197422 7 log.go:172] (0xc001ddaf00) (3) Data frame handling I0622 11:51:33.197440 7 log.go:172] (0xc001ddaf00) (3) Data frame sent I0622 11:51:33.198187 7 log.go:172] (0xc001978370) Data frame received for 3 I0622 11:51:33.198213 7 log.go:172] (0xc001ddaf00) (3) Data frame handling I0622 11:51:33.198435 7 log.go:172] (0xc001978370) Data frame received for 5 I0622 11:51:33.198451 7 log.go:172] (0xc000cac320) (5) Data frame handling I0622 11:51:33.199513 7 log.go:172] (0xc001978370) Data frame received for 1 I0622 11:51:33.199545 7 log.go:172] (0xc000cac280) (1) Data frame handling I0622 11:51:33.199582 7 log.go:172] (0xc000cac280) (1) Data frame sent I0622 11:51:33.199618 7 log.go:172] (0xc001978370) (0xc000cac280) Stream removed, broadcasting: 1 I0622 11:51:33.199672 7 log.go:172] (0xc001978370) Go away received I0622 11:51:33.199786 7 log.go:172] (0xc001978370) (0xc000cac280) Stream removed, broadcasting: 1 I0622 11:51:33.199813 7 log.go:172] (0xc001978370) (0xc001ddaf00) Stream removed, broadcasting: 3 I0622 11:51:33.199841 7 log.go:172] (0xc001978370) (0xc000cac320) Stream removed, broadcasting: 5 Jun 22 11:51:33.199: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:51:33.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-hfhsb" for this suite. 
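The intra-pod UDP check above works by exec'ing curl inside the host test container against its /dial endpoint, which in turn probes the target pod over UDP and reports which hostnames answered. The sketch below issues the same kind of request directly from Go; the IP and port values are copied from the log, but the response schema (a JSON object with a "responses" array) is an assumption about the test image's webserver rather than something the log itself shows.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// dialCheck asks the test webserver at proxyAddr to probe targetHost:targetPort
// over UDP, mirroring the /dial query seen in the log, and returns the
// hostnames that responded (response schema assumed, see note above).
func dialCheck(proxyAddr, targetHost string, targetPort int) ([]string, error) {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "udp")
	q.Set("host", targetHost)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", "1")

	resp, err := http.Get(fmt.Sprintf("http://%s/dial?%s", proxyAddr, q.Encode()))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.Unmarshal(body, &out); err != nil {
		return nil, err
	}
	return out.Responses, nil
}

func main() {
	hostnames, err := dialCheck("10.244.2.126:8080", "10.244.2.125", 8081)
	if err != nil {
		fmt.Println("dial check failed:", err)
		return
	}
	fmt.Println("responding pods:", hostnames)
}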
Jun 22 11:51:57.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:51:57.230: INFO: namespace: e2e-tests-pod-network-test-hfhsb, resource: bindings, ignored listing per whitelist Jun 22 11:51:57.288: INFO: namespace e2e-tests-pod-network-test-hfhsb deletion completed in 24.083441789s • [SLOW TEST:46.286 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:51:57.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 11:51:57.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c649d3f2-b47e-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-8tqxb" to be "success or failure" Jun 22 11:51:57.440: INFO: Pod "downwardapi-volume-c649d3f2-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.238113ms Jun 22 11:51:59.444: INFO: Pod "downwardapi-volume-c649d3f2-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056335126s Jun 22 11:52:01.448: INFO: Pod "downwardapi-volume-c649d3f2-b47e-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060477092s STEP: Saw pod success Jun 22 11:52:01.448: INFO: Pod "downwardapi-volume-c649d3f2-b47e-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:52:01.451: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c649d3f2-b47e-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 11:52:01.538: INFO: Waiting for pod downwardapi-volume-c649d3f2-b47e-11ea-8cd8-0242ac11001b to disappear Jun 22 11:52:01.542: INFO: Pod downwardapi-volume-c649d3f2-b47e-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:52:01.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8tqxb" for this suite. 
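The projected downwardAPI test above mounts the container's memory request into the pod as a file and verifies the container can read it back. Below is a sketch of the volume definition involved, using the k8s.io/api types; the volume name, file path and 1Mi divisor are illustrative choices, not values taken from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A projected downwardAPI volume exposing the container's memory request
	// as a file inside the mount.
	divisor := resource.MustParse("1Mi")
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{
								{
									Path: "memory_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
										Divisor:       divisor,
									},
								},
							},
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}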
Jun 22 11:52:07.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:52:07.592: INFO: namespace: e2e-tests-projected-8tqxb, resource: bindings, ignored listing per whitelist Jun 22 11:52:07.627: INFO: namespace e2e-tests-projected-8tqxb deletion completed in 6.081181703s • [SLOW TEST:10.339 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:52:07.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jun 22 11:52:07.753: INFO: Waiting up to 5m0s for pod "var-expansion-cc765254-b47e-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-var-expansion-whx9n" to be "success or failure" Jun 22 11:52:07.757: INFO: Pod "var-expansion-cc765254-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.873371ms Jun 22 11:52:09.761: INFO: Pod "var-expansion-cc765254-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007983914s Jun 22 11:52:11.765: INFO: Pod "var-expansion-cc765254-b47e-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012401969s STEP: Saw pod success Jun 22 11:52:11.766: INFO: Pod "var-expansion-cc765254-b47e-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:52:11.768: INFO: Trying to get logs from node hunter-worker pod var-expansion-cc765254-b47e-11ea-8cd8-0242ac11001b container dapi-container: STEP: delete the pod Jun 22 11:52:11.866: INFO: Waiting for pod var-expansion-cc765254-b47e-11ea-8cd8-0242ac11001b to disappear Jun 22 11:52:11.874: INFO: Pod var-expansion-cc765254-b47e-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:52:11.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-whx9n" for this suite. 
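The variable-expansion test above starts a container whose command references an environment variable using the Kubernetes $(VAR) syntax, which the kubelet substitutes before the process starts. A minimal container spec using that pattern might look like the sketch below; the message value and image are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The kubelet expands $(MESSAGE) in the command string against the
	// container's own env entries before exec'ing the process.
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo $(MESSAGE)"},
		Env: []corev1.EnvVar{
			{Name: "MESSAGE", Value: "hello from variable expansion"},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}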
Jun 22 11:52:17.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:52:17.983: INFO: namespace: e2e-tests-var-expansion-whx9n, resource: bindings, ignored listing per whitelist Jun 22 11:52:17.998: INFO: namespace e2e-tests-var-expansion-whx9n deletion completed in 6.121259881s • [SLOW TEST:10.371 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:52:17.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-d2a0b555-b47e-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 11:52:18.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2a26a8a-b47e-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-configmap-rx8kx" to be "success or failure" Jun 22 11:52:18.107: INFO: Pod "pod-configmaps-d2a26a8a-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.593041ms Jun 22 11:52:20.112: INFO: Pod "pod-configmaps-d2a26a8a-b47e-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007154861s Jun 22 11:52:22.116: INFO: Pod "pod-configmaps-d2a26a8a-b47e-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011236285s STEP: Saw pod success Jun 22 11:52:22.116: INFO: Pod "pod-configmaps-d2a26a8a-b47e-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:52:22.119: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d2a26a8a-b47e-11ea-8cd8-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 22 11:52:22.164: INFO: Waiting for pod pod-configmaps-d2a26a8a-b47e-11ea-8cd8-0242ac11001b to disappear Jun 22 11:52:22.169: INFO: Pod pod-configmaps-d2a26a8a-b47e-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:52:22.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rx8kx" for this suite. 
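"With mappings" in the ConfigMap volume test above means the volume's items remap a ConfigMap key to a chosen file path inside the mount, instead of using the key name as the file name. A sketch of such a volume, with placeholder names standing in for the generated e2e names, is below.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Key "data-2" from the ConfigMap is surfaced as the file
	// path/to/data-2 under the volume's mount point.
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{
					{Key: "data-2", Path: "path/to/data-2"},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}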
Jun 22 11:52:28.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:52:28.243: INFO: namespace: e2e-tests-configmap-rx8kx, resource: bindings, ignored listing per whitelist Jun 22 11:52:28.267: INFO: namespace e2e-tests-configmap-rx8kx deletion completed in 6.092718915s • [SLOW TEST:10.268 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:52:28.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Jun 22 11:52:28.375: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jun 22 11:52:28.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:31.183: INFO: stderr: "" Jun 22 11:52:31.183: INFO: stdout: "service/redis-slave created\n" Jun 22 11:52:31.183: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jun 22 11:52:31.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:31.504: INFO: stderr: "" Jun 22 11:52:31.504: INFO: stdout: "service/redis-master created\n" Jun 22 11:52:31.505: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 22 11:52:31.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:31.800: INFO: stderr: "" Jun 22 11:52:31.801: INFO: stdout: "service/frontend created\n" Jun 22 11:52:31.801: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jun 22 11:52:31.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:32.064: INFO: stderr: "" Jun 22 11:52:32.065: INFO: stdout: "deployment.extensions/frontend created\n" Jun 22 11:52:32.065: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 22 11:52:32.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:32.402: INFO: stderr: "" Jun 22 11:52:32.402: INFO: stdout: "deployment.extensions/redis-master created\n" Jun 22 11:52:32.403: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jun 22 11:52:32.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:32.675: INFO: stderr: "" Jun 22 11:52:32.675: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Jun 22 11:52:32.675: INFO: Waiting for all frontend pods to be Running. Jun 22 11:52:42.725: INFO: Waiting for frontend to serve content. Jun 22 11:52:43.533: INFO: Trying to add a new entry to the guestbook. Jun 22 11:52:43.546: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 22 11:52:43.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:43.722: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 22 11:52:43.722: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jun 22 11:52:43.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:43.866: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 11:52:43.866: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 22 11:52:43.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:43.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 11:52:43.995: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 22 11:52:43.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:44.105: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 11:52:44.105: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 22 11:52:44.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:44.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 11:52:44.217: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 22 11:52:44.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f8qvr' Jun 22 11:52:44.342: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 11:52:44.342: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:52:44.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-f8qvr" for this suite. 
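Every guestbook component above is created by piping a manifest into kubectl create -f - and later removed with kubectl delete --grace-period=0 --force, both scoped via --kubeconfig and --namespace. The sketch below reproduces the create half from Go with os/exec; the namespace is a placeholder, kubectl is assumed to be on PATH rather than at /usr/local/bin, and the redis-slave Service manifest is re-indented from the flattened YAML the log prints.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// kubectlCreateFromStdin mirrors the pattern in the log: pipe a manifest to
// `kubectl create -f -` in a given namespace and return the combined output.
func kubectlCreateFromStdin(kubeconfig, namespace, manifest string) (string, error) {
	cmd := exec.Command("kubectl",
		"--kubeconfig="+kubeconfig,
		"create", "-f", "-",
		"--namespace="+namespace)
	cmd.Stdin = bytes.NewBufferString(manifest)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// The redis-slave Service from the log, re-indented.
	manifest := `apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
`
	out, err := kubectlCreateFromStdin("/root/.kube/config", "demo-namespace", manifest)
	fmt.Println(out)
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}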
Jun 22 11:53:24.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:53:24.432: INFO: namespace: e2e-tests-kubectl-f8qvr, resource: bindings, ignored listing per whitelist Jun 22 11:53:24.473: INFO: namespace e2e-tests-kubectl-f8qvr deletion completed in 40.118769746s • [SLOW TEST:56.206 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:53:24.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:53:28.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-fq6gd" for this suite. 
Jun 22 11:54:14.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:54:14.655: INFO: namespace: e2e-tests-kubelet-test-fq6gd, resource: bindings, ignored listing per whitelist Jun 22 11:54:14.722: INFO: namespace e2e-tests-kubelet-test-fq6gd deletion completed in 46.090858137s • [SLOW TEST:50.249 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:54:14.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:54:14.903: INFO: Creating ReplicaSet my-hostname-basic-1841e15a-b47f-11ea-8cd8-0242ac11001b Jun 22 11:54:15.023: INFO: Pod name my-hostname-basic-1841e15a-b47f-11ea-8cd8-0242ac11001b: Found 0 pods out of 1 Jun 22 11:54:20.027: INFO: Pod name my-hostname-basic-1841e15a-b47f-11ea-8cd8-0242ac11001b: Found 1 pods out of 1 Jun 22 11:54:20.027: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1841e15a-b47f-11ea-8cd8-0242ac11001b" is running Jun 22 11:54:20.030: INFO: Pod "my-hostname-basic-1841e15a-b47f-11ea-8cd8-0242ac11001b-6grcw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 11:54:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 11:54:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 11:54:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 11:54:15 +0000 UTC Reason: Message:}]) Jun 22 11:54:20.030: INFO: Trying to dial the pod Jun 22 11:54:25.041: INFO: Controller my-hostname-basic-1841e15a-b47f-11ea-8cd8-0242ac11001b: Got expected result from replica 1 [my-hostname-basic-1841e15a-b47f-11ea-8cd8-0242ac11001b-6grcw]: "my-hostname-basic-1841e15a-b47f-11ea-8cd8-0242ac11001b-6grcw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:54:25.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-gv9rk" for this suite. 
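The ReplicaSet test above creates a one-replica ReplicaSet whose pod serves its own hostname, waits for the pod to run, then dials it and expects the pod's name back. The sketch below shows the shape of such a ReplicaSet with the k8s.io/api types; the name and labels are simplified and the image is a placeholder, since the log does not show which serve-hostname image the suite used.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}

	// A single-replica ReplicaSet whose selector matches its pod template,
	// the shape of object the test creates.
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{
							Name:  "my-hostname-basic",
							Image: "example.invalid/serve-hostname:placeholder",
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}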
Jun 22 11:54:31.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:54:31.113: INFO: namespace: e2e-tests-replicaset-gv9rk, resource: bindings, ignored listing per whitelist Jun 22 11:54:31.141: INFO: namespace e2e-tests-replicaset-gv9rk deletion completed in 6.096878853s • [SLOW TEST:16.419 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:54:31.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 11:54:31.320: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2202d500-b47f-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001831b3a), BlockOwnerDeletion:(*bool)(0xc001831b3b)}} Jun 22 11:54:31.383: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2200a1be-b47f-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0026d6c8a), BlockOwnerDeletion:(*bool)(0xc0026d6c8b)}} Jun 22 11:54:31.392: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"220122a8-b47f-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001831d12), BlockOwnerDeletion:(*bool)(0xc001831d13)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:54:36.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-q7lf5" for this suite. 
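The garbage-collector test above builds a deliberate ownership cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, and the test then checks that garbage collection is not blocked by the circle. The sketch below constructs that kind of OwnerReference; the UIDs are placeholders, whereas in the log they are the UIDs of the freshly created pods.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownerRef builds the kind of reference the log prints for pod1/pod2/pod3,
// with Controller and BlockOwnerDeletion set as in the test output.
func ownerRef(name string, uid types.UID) metav1.OwnerReference {
	controller := true
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               name,
		UID:                uid,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	// Each pod's single owner, forming the cycle from the log.
	refs := map[string]metav1.OwnerReference{
		"pod1": ownerRef("pod3", "uid-of-pod3"), // pod1 is owned by pod3
		"pod2": ownerRef("pod1", "uid-of-pod1"), // pod2 is owned by pod1
		"pod3": ownerRef("pod2", "uid-of-pod2"), // pod3 is owned by pod2
	}
	for pod, ref := range refs {
		fmt.Printf("%s.OwnerReferences = [%s/%s %s]\n", pod, ref.APIVersion, ref.Kind, ref.Name)
	}
}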
Jun 22 11:54:42.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:54:42.532: INFO: namespace: e2e-tests-gc-q7lf5, resource: bindings, ignored listing per whitelist Jun 22 11:54:42.558: INFO: namespace e2e-tests-gc-q7lf5 deletion completed in 6.103581348s • [SLOW TEST:11.417 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:54:42.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jun 22 11:54:42.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:42.945: INFO: stderr: "" Jun 22 11:54:42.945: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 22 11:54:42.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:43.130: INFO: stderr: "" Jun 22 11:54:43.130: INFO: stdout: "update-demo-nautilus-4b7nn update-demo-nautilus-9p67n " Jun 22 11:54:43.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4b7nn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:43.237: INFO: stderr: "" Jun 22 11:54:43.237: INFO: stdout: "" Jun 22 11:54:43.237: INFO: update-demo-nautilus-4b7nn is created but not running Jun 22 11:54:48.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:48.340: INFO: stderr: "" Jun 22 11:54:48.341: INFO: stdout: "update-demo-nautilus-4b7nn update-demo-nautilus-9p67n " Jun 22 11:54:48.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4b7nn -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:48.435: INFO: stderr: "" Jun 22 11:54:48.436: INFO: stdout: "true" Jun 22 11:54:48.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4b7nn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:48.527: INFO: stderr: "" Jun 22 11:54:48.527: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 11:54:48.527: INFO: validating pod update-demo-nautilus-4b7nn Jun 22 11:54:48.537: INFO: got data: { "image": "nautilus.jpg" } Jun 22 11:54:48.537: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 11:54:48.537: INFO: update-demo-nautilus-4b7nn is verified up and running Jun 22 11:54:48.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p67n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:48.645: INFO: stderr: "" Jun 22 11:54:48.645: INFO: stdout: "true" Jun 22 11:54:48.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9p67n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:48.738: INFO: stderr: "" Jun 22 11:54:48.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 11:54:48.738: INFO: validating pod update-demo-nautilus-9p67n Jun 22 11:54:48.747: INFO: got data: { "image": "nautilus.jpg" } Jun 22 11:54:48.747: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 11:54:48.747: INFO: update-demo-nautilus-9p67n is verified up and running STEP: using delete to clean up resources Jun 22 11:54:48.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:48.865: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 22 11:54:48.865: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 22 11:54:48.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-p4mhm' Jun 22 11:54:48.989: INFO: stderr: "No resources found.\n" Jun 22 11:54:48.989: INFO: stdout: "" Jun 22 11:54:48.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-p4mhm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 11:54:49.099: INFO: stderr: "" Jun 22 11:54:49.099: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:54:49.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p4mhm" for this suite. Jun 22 11:55:11.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:55:11.472: INFO: namespace: e2e-tests-kubectl-p4mhm, resource: bindings, ignored listing per whitelist Jun 22 11:55:11.499: INFO: namespace e2e-tests-kubectl-p4mhm deletion completed in 22.395980039s • [SLOW TEST:28.940 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:55:11.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-3a0f12d2-b47f-11ea-8cd8-0242ac11001b Jun 22 11:55:11.634: INFO: Pod name my-hostname-basic-3a0f12d2-b47f-11ea-8cd8-0242ac11001b: Found 0 pods out of 1 Jun 22 11:55:16.638: INFO: Pod name my-hostname-basic-3a0f12d2-b47f-11ea-8cd8-0242ac11001b: Found 1 pods out of 1 Jun 22 11:55:16.638: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3a0f12d2-b47f-11ea-8cd8-0242ac11001b" are running Jun 22 11:55:16.642: INFO: Pod "my-hostname-basic-3a0f12d2-b47f-11ea-8cd8-0242ac11001b-ht6b7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 11:55:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 11:55:16 +0000 UTC Reason: Message:} {Type:ContainersReady 
Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 11:55:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 11:55:11 +0000 UTC Reason: Message:}]) Jun 22 11:55:16.642: INFO: Trying to dial the pod Jun 22 11:55:21.676: INFO: Controller my-hostname-basic-3a0f12d2-b47f-11ea-8cd8-0242ac11001b: Got expected result from replica 1 [my-hostname-basic-3a0f12d2-b47f-11ea-8cd8-0242ac11001b-ht6b7]: "my-hostname-basic-3a0f12d2-b47f-11ea-8cd8-0242ac11001b-ht6b7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:55:21.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-hfpx2" for this suite. Jun 22 11:55:27.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:55:27.733: INFO: namespace: e2e-tests-replication-controller-hfpx2, resource: bindings, ignored listing per whitelist Jun 22 11:55:27.776: INFO: namespace e2e-tests-replication-controller-hfpx2 deletion completed in 6.096944246s • [SLOW TEST:16.277 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:55:27.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-43cc662e-b47f-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 11:55:28.008: INFO: Waiting up to 5m0s for pod "pod-secrets-43d1985f-b47f-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-secrets-89grv" to be "success or failure" Jun 22 11:55:28.017: INFO: Pod "pod-secrets-43d1985f-b47f-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.651123ms Jun 22 11:55:30.020: INFO: Pod "pod-secrets-43d1985f-b47f-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011836129s Jun 22 11:55:32.029: INFO: Pod "pod-secrets-43d1985f-b47f-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020924886s STEP: Saw pod success Jun 22 11:55:32.029: INFO: Pod "pod-secrets-43d1985f-b47f-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:55:32.032: INFO: Trying to get logs from node hunter-worker pod pod-secrets-43d1985f-b47f-11ea-8cd8-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 22 11:55:32.062: INFO: Waiting for pod pod-secrets-43d1985f-b47f-11ea-8cd8-0242ac11001b to disappear Jun 22 11:55:32.077: INFO: Pod pod-secrets-43d1985f-b47f-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:55:32.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-89grv" for this suite. Jun 22 11:55:39.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:55:40.626: INFO: namespace: e2e-tests-secrets-89grv, resource: bindings, ignored listing per whitelist Jun 22 11:55:40.631: INFO: namespace e2e-tests-secrets-89grv deletion completed in 8.550518353s • [SLOW TEST:12.854 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:55:40.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 11:55:40.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b6e8fd7-b47f-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-wlzdg" to be "success or failure" Jun 22 11:55:40.850: INFO: Pod "downwardapi-volume-4b6e8fd7-b47f-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 70.2125ms Jun 22 11:55:42.855: INFO: Pod "downwardapi-volume-4b6e8fd7-b47f-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074350467s Jun 22 11:55:44.858: INFO: Pod "downwardapi-volume-4b6e8fd7-b47f-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077713886s Jun 22 11:55:46.862: INFO: Pod "downwardapi-volume-4b6e8fd7-b47f-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.081386295s STEP: Saw pod success Jun 22 11:55:46.862: INFO: Pod "downwardapi-volume-4b6e8fd7-b47f-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:55:46.864: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4b6e8fd7-b47f-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 11:55:46.924: INFO: Waiting for pod downwardapi-volume-4b6e8fd7-b47f-11ea-8cd8-0242ac11001b to disappear Jun 22 11:55:46.957: INFO: Pod downwardapi-volume-4b6e8fd7-b47f-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:55:46.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wlzdg" for this suite. Jun 22 11:55:53.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:55:53.221: INFO: namespace: e2e-tests-projected-wlzdg, resource: bindings, ignored listing per whitelist Jun 22 11:55:53.228: INFO: namespace e2e-tests-projected-wlzdg deletion completed in 6.2396222s • [SLOW TEST:12.596 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:55:53.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 22 11:56:05.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:05.958: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:07.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:07.976: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:09.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:09.963: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:11.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:11.963: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:13.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:13.963: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:15.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:15.988: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:17.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:17.962: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:19.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:19.962: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:21.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:21.962: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:23.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:23.983: INFO: Pod pod-with-prestop-exec-hook still exists Jun 22 11:56:25.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 22 11:56:25.964: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:56:25.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-slh9j" for this suite. 
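The lifecycle-hook spec above attaches a preStop exec hook to pod-with-prestop-exec-hook, deletes the pod, polls until it is gone, and then checks that the hook fired against the helper pod created in the BeforeEach. A rough sketch of a pod carrying such a hook, not the test's actual fixture; the pod name, image and hook-handler URL are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo              # illustrative name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container just before SIGTERM is delivered;
          # here it notifies a hypothetical handler service and never fails.
          command: ["sh", "-c", "wget -q -O- http://hook-handler:8080/echo?msg=prestop || true"]
EOF
# Deleting the pod is what triggers the preStop hook.
kubectl delete pod prestop-demo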
Jun 22 11:56:48.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:56:48.033: INFO: namespace: e2e-tests-container-lifecycle-hook-slh9j, resource: bindings, ignored listing per whitelist Jun 22 11:56:48.171: INFO: namespace e2e-tests-container-lifecycle-hook-slh9j deletion completed in 22.175114562s • [SLOW TEST:54.943 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:56:48.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 22 11:57:00.524: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:00.545: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:02.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:02.550: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:04.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:04.550: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:06.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:06.550: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:08.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:08.549: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:10.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:10.549: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:12.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:12.564: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:14.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:14.550: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:16.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:16.576: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:18.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 
11:57:18.550: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 11:57:20.545: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 11:57:20.549: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:57:20.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-rfh7q" for this suite. Jun 22 11:57:44.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:57:44.668: INFO: namespace: e2e-tests-container-lifecycle-hook-rfh7q, resource: bindings, ignored listing per whitelist Jun 22 11:57:44.698: INFO: namespace e2e-tests-container-lifecycle-hook-rfh7q deletion completed in 24.144119462s • [SLOW TEST:56.526 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:57:44.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 22 11:57:44.806: INFO: Waiting up to 5m0s for pod "pod-955bf27b-b47f-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-wqkxl" to be "success or failure" Jun 22 11:57:44.811: INFO: Pod "pod-955bf27b-b47f-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482251ms Jun 22 11:57:46.815: INFO: Pod "pod-955bf27b-b47f-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008931629s Jun 22 11:57:48.819: INFO: Pod "pod-955bf27b-b47f-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012885565s STEP: Saw pod success Jun 22 11:57:48.819: INFO: Pod "pod-955bf27b-b47f-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 11:57:48.822: INFO: Trying to get logs from node hunter-worker pod pod-955bf27b-b47f-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 11:57:48.979: INFO: Waiting for pod pod-955bf27b-b47f-11ea-8cd8-0242ac11001b to disappear Jun 22 11:57:49.032: INFO: Pod pod-955bf27b-b47f-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:57:49.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wqkxl" for this suite. Jun 22 11:57:55.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:57:55.142: INFO: namespace: e2e-tests-emptydir-wqkxl, resource: bindings, ignored listing per whitelist Jun 22 11:57:55.146: INFO: namespace e2e-tests-emptydir-wqkxl deletion completed in 6.094555571s • [SLOW TEST:10.448 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:57:55.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 22 11:57:55.365: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 11:57:55.390: INFO: Waiting for terminating namespaces to be deleted... 
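Before the scheduler-predicates setup continues below: the EmptyDir spec that just finished mounts a tmpfs-backed emptyDir and checks a root-owned file created with mode 0644. A hedged approximation using busybox instead of the e2e mounttest image; pod and volume names are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs       # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
    # Write a file as root, force mode 0644, then show the mode and fs type.
    command: ["sh", "-c", "echo hello > /mnt/scratch/f && chmod 0644 /mnt/scratch/f && ls -l /mnt/scratch/f && mount | grep /mnt/scratch"]
EOF
# Once the pod has completed, the log should show '-rw-r--r--' and 'tmpfs'.
kubectl logs emptydir-0644-tmpfs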
Jun 22 11:57:55.392: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 22 11:57:55.398: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 22 11:57:55.398: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 11:57:55.398: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 22 11:57:55.398: INFO: Container kindnet-cni ready: true, restart count 0 Jun 22 11:57:55.398: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 22 11:57:55.398: INFO: Container coredns ready: true, restart count 0 Jun 22 11:57:55.398: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 22 11:57:55.406: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 22 11:57:55.406: INFO: Container kindnet-cni ready: true, restart count 0 Jun 22 11:57:55.406: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 22 11:57:55.406: INFO: Container coredns ready: true, restart count 0 Jun 22 11:57:55.406: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 22 11:57:55.406: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Jun 22 11:57:55.543: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Jun 22 11:57:55.543: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Jun 22 11:57:55.543: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Jun 22 11:57:55.543: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Jun 22 11:57:55.543: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Jun 22 11:57:55.543: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
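The INFO lines above add up the CPU requests already sitting on each node; the filler pods created next are sized to consume the remaining allocatable CPU. Roughly the same numbers can be read back by hand, sketched here with the node names this cluster reports:

# Allocatable CPU on one node, the budget the filler pods are sized against.
kubectl get node hunter-worker -o jsonpath='{.status.allocatable.cpu}'; echo
# Per-pod CPU requests on that node, the same data as the INFO lines above.
kubectl describe node hunter-worker | sed -n '/Non-terminated Pods/,/Allocated resources/p'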
STEP: Considering event: Type = [Normal], Name = [filler-pod-9bc4d074-b47f-11ea-8cd8-0242ac11001b.161adbc93bd2ebb1], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-nr8nz/filler-pod-9bc4d074-b47f-11ea-8cd8-0242ac11001b to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-9bc4d074-b47f-11ea-8cd8-0242ac11001b.161adbc989593afe], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9bc4d074-b47f-11ea-8cd8-0242ac11001b.161adbc9d65938b2], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-9bc4d074-b47f-11ea-8cd8-0242ac11001b.161adbc9e8f842f9], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-9bc61393-b47f-11ea-8cd8-0242ac11001b.161adbc93ccb0aca], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-nr8nz/filler-pod-9bc61393-b47f-11ea-8cd8-0242ac11001b to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-9bc61393-b47f-11ea-8cd8-0242ac11001b.161adbc9c183d20e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9bc61393-b47f-11ea-8cd8-0242ac11001b.161adbca0b4a88c1], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-9bc61393-b47f-11ea-8cd8-0242ac11001b.161adbca192a4d05], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.161adbcaa3be52bc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 11:58:02.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-nr8nz" for this suite. 
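The events above end with the expected FailedScheduling: once both workers are filled, the extra pod reports "0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu." The same event can be provoked on any saturated cluster with a deliberately oversized request; the pod name and the 1000-CPU ask below are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpu-hog                   # illustrative name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"               # far beyond any node's allocatable CPU
EOF
# The Events section should show FailedScheduling with 'Insufficient cpu'.
kubectl describe pod cpu-hog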
Jun 22 11:58:08.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 11:58:08.786: INFO: namespace: e2e-tests-sched-pred-nr8nz, resource: bindings, ignored listing per whitelist Jun 22 11:58:08.803: INFO: namespace e2e-tests-sched-pred-nr8nz deletion completed in 6.086919612s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.656 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 11:58:08.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-4k8ns Jun 22 11:58:13.109: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-4k8ns STEP: checking the pod's current state and verifying that restartCount is present Jun 22 11:58:13.112: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:02:14.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-4k8ns" for this suite. 
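The probe spec above leaves pod liveness-http running for roughly four minutes and asserts that restartCount never moves off 0 while /healthz keeps answering 200. A sketch of an equivalent probe, using plain nginx and its root path instead of the e2e liveness image's /healthz; names and timings are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo        # illustrative name
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                   # always answers 200 on stock nginx
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
# After letting it run for a few minutes, the restart count should still be 0.
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'; echo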
Jun 22 12:02:20.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:02:20.206: INFO: namespace: e2e-tests-container-probe-4k8ns, resource: bindings, ignored listing per whitelist Jun 22 12:02:20.274: INFO: namespace e2e-tests-container-probe-4k8ns deletion completed in 6.117102191s • [SLOW TEST:251.471 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:02:20.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jun 22 12:02:20.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 22 12:02:20.642: INFO: stderr: "" Jun 22 12:02:20.642: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:02:20.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5tntv" for this suite. 
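The api-versions spec only asserts that the core group "v1" appears in the discovery output shown above. The same check by hand, plus the raw discovery document it comes from; port 8001 is an arbitrary local choice:

kubectl api-versions | grep -x v1           # prints "v1" when the core group is served
# The underlying discovery endpoint, reached through an authenticated proxy.
kubectl proxy --port=8001 &
sleep 1
curl -s http://127.0.0.1:8001/api           # APIVersions object listing "v1"
kill $!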
Jun 22 12:02:26.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:02:26.695: INFO: namespace: e2e-tests-kubectl-5tntv, resource: bindings, ignored listing per whitelist Jun 22 12:02:26.742: INFO: namespace e2e-tests-kubectl-5tntv deletion completed in 6.095598486s • [SLOW TEST:6.468 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:02:26.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-3d91ec84-b480-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 12:02:27.021: INFO: Waiting up to 5m0s for pod "pod-secrets-3d93fa14-b480-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-secrets-mmqjm" to be "success or failure" Jun 22 12:02:27.040: INFO: Pod "pod-secrets-3d93fa14-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.535064ms Jun 22 12:02:29.045: INFO: Pod "pod-secrets-3d93fa14-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023898255s Jun 22 12:02:31.049: INFO: Pod "pod-secrets-3d93fa14-b480-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028306459s STEP: Saw pod success Jun 22 12:02:31.049: INFO: Pod "pod-secrets-3d93fa14-b480-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:02:31.052: INFO: Trying to get logs from node hunter-worker pod pod-secrets-3d93fa14-b480-11ea-8cd8-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 22 12:02:31.624: INFO: Waiting for pod pod-secrets-3d93fa14-b480-11ea-8cd8-0242ac11001b to disappear Jun 22 12:02:31.904: INFO: Pod pod-secrets-3d93fa14-b480-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:02:31.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mmqjm" for this suite. 
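The Secrets volume spec above creates a secret, mounts it into a pod, and reads the projected file back through the container log. A minimal sketch with illustrative names and a single literal key:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-volume         # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
EOF
# Once the pod has completed, the log should print "value-1".
kubectl logs pod-secret-volume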
Jun 22 12:02:37.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:02:37.971: INFO: namespace: e2e-tests-secrets-mmqjm, resource: bindings, ignored listing per whitelist Jun 22 12:02:37.995: INFO: namespace e2e-tests-secrets-mmqjm deletion completed in 6.087545965s • [SLOW TEST:11.253 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:02:37.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-442f925f-b480-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 12:02:38.156: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-44372d51-b480-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-9hbcb" to be "success or failure" Jun 22 12:02:38.207: INFO: Pod "pod-projected-configmaps-44372d51-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 50.921287ms Jun 22 12:02:40.210: INFO: Pod "pod-projected-configmaps-44372d51-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05454016s Jun 22 12:02:42.215: INFO: Pod "pod-projected-configmaps-44372d51-b480-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.058860978s Jun 22 12:02:44.219: INFO: Pod "pod-projected-configmaps-44372d51-b480-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063068365s STEP: Saw pod success Jun 22 12:02:44.219: INFO: Pod "pod-projected-configmaps-44372d51-b480-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:02:44.222: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-44372d51-b480-11ea-8cd8-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 22 12:02:44.283: INFO: Waiting for pod pod-projected-configmaps-44372d51-b480-11ea-8cd8-0242ac11001b to disappear Jun 22 12:02:44.288: INFO: Pod pod-projected-configmaps-44372d51-b480-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:02:44.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9hbcb" for this suite. 
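The Projected configMap spec above mounts the same configMap through two separate projected volumes in one pod and reads both copies. A hedged sketch of that shape; configMap, pod and mount names are illustrative:

kubectl create configmap cm-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-twice        # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-1
    projected:
      sources:
      - configMap:
          name: cm-demo
  - name: projected-2
    projected:
      sources:
      - configMap:
          name: cm-demo
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # Both mounts should expose the same key with the same contents.
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: projected-1
      mountPath: /etc/projected-1
    - name: projected-2
      mountPath: /etc/projected-2
EOF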
Jun 22 12:02:50.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:02:50.370: INFO: namespace: e2e-tests-projected-9hbcb, resource: bindings, ignored listing per whitelist Jun 22 12:02:50.409: INFO: namespace e2e-tests-projected-9hbcb deletion completed in 6.118257942s • [SLOW TEST:12.413 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:02:50.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4b9aeb07-b480-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 12:02:50.610: INFO: Waiting up to 5m0s for pod "pod-secrets-4ba13ba7-b480-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-secrets-vzt29" to be "success or failure" Jun 22 12:02:50.613: INFO: Pod "pod-secrets-4ba13ba7-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.064227ms Jun 22 12:02:52.629: INFO: Pod "pod-secrets-4ba13ba7-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018884839s Jun 22 12:02:54.633: INFO: Pod "pod-secrets-4ba13ba7-b480-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023267437s STEP: Saw pod success Jun 22 12:02:54.633: INFO: Pod "pod-secrets-4ba13ba7-b480-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:02:54.636: INFO: Trying to get logs from node hunter-worker pod pod-secrets-4ba13ba7-b480-11ea-8cd8-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 22 12:02:54.733: INFO: Waiting for pod pod-secrets-4ba13ba7-b480-11ea-8cd8-0242ac11001b to disappear Jun 22 12:02:54.739: INFO: Pod pod-secrets-4ba13ba7-b480-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:02:54.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vzt29" for this suite. 
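The defaultMode variant of the Secrets volume spec differs only in the permission bits stamped on the projected files. A sketch with 0400 chosen arbitrarily; names are illustrative:

kubectl create secret generic secret-mode-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo
      defaultMode: 0400           # every projected file becomes owner read-only
  containers:
  - name: secret-volume-test
    image: busybox
    # -L follows the symlink the secret volume creates for each key.
    command: ["sh", "-c", "stat -L -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
EOF
# Once the pod has completed, the log should print 400.
kubectl logs secret-defaultmode-demo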
Jun 22 12:03:00.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:03:00.763: INFO: namespace: e2e-tests-secrets-vzt29, resource: bindings, ignored listing per whitelist Jun 22 12:03:00.823: INFO: namespace e2e-tests-secrets-vzt29 deletion completed in 6.079614502s • [SLOW TEST:10.414 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:03:00.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:03:00.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-bph9l" for this suite. 
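The Kubelet spec above schedules a busybox command that always fails and only asserts that the resulting pod can still be deleted cleanly. A hedged equivalent; the pod name is illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: bin-false                 # illustrative name
spec:
  restartPolicy: OnFailure        # the kubelet keeps restarting the failing container
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]       # always exits 1
EOF
kubectl get pod bin-false         # CrashLoopBackOff is the expected steady state
kubectl delete pod bin-false      # deletion should still succeed promptly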
Jun 22 12:03:07.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:03:07.084: INFO: namespace: e2e-tests-kubelet-test-bph9l, resource: bindings, ignored listing per whitelist Jun 22 12:03:07.101: INFO: namespace e2e-tests-kubelet-test-bph9l deletion completed in 6.096148823s • [SLOW TEST:6.278 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:03:07.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 22 12:03:15.059: INFO: 5 pods remaining Jun 22 12:03:15.059: INFO: 0 pods has nil DeletionTimestamp Jun 22 12:03:15.059: INFO: Jun 22 12:03:15.447: INFO: 0 pods remaining Jun 22 12:03:15.447: INFO: 0 pods has nil DeletionTimestamp Jun 22 12:03:15.447: INFO: STEP: Gathering metrics W0622 12:03:17.034098 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 22 12:03:17.034: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:03:17.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-kvh4b" for this suite. Jun 22 12:03:23.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:03:23.316: INFO: namespace: e2e-tests-gc-kvh4b, resource: bindings, ignored listing per whitelist Jun 22 12:03:23.378: INFO: namespace e2e-tests-gc-kvh4b deletion completed in 6.293956193s • [SLOW TEST:16.276 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:03:23.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jun 22 12:03:23.471: INFO: Waiting up to 5m0s for pod "client-containers-5f39effc-b480-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-containers-5rs42" to be "success or failure" Jun 22 12:03:23.515: INFO: Pod "client-containers-5f39effc-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.190947ms Jun 22 12:03:25.519: INFO: Pod "client-containers-5f39effc-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048127239s Jun 22 12:03:27.522: INFO: Pod "client-containers-5f39effc-b480-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051217889s STEP: Saw pod success Jun 22 12:03:27.523: INFO: Pod "client-containers-5f39effc-b480-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:03:27.525: INFO: Trying to get logs from node hunter-worker pod client-containers-5f39effc-b480-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 12:03:27.542: INFO: Waiting for pod client-containers-5f39effc-b480-11ea-8cd8-0242ac11001b to disappear Jun 22 12:03:27.547: INFO: Pod client-containers-5f39effc-b480-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:03:27.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-5rs42" for this suite. Jun 22 12:03:33.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:03:33.666: INFO: namespace: e2e-tests-containers-5rs42, resource: bindings, ignored listing per whitelist Jun 22 12:03:33.668: INFO: namespace e2e-tests-containers-5rs42 deletion completed in 6.117946695s • [SLOW TEST:10.290 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:03:33.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 12:03:33.734: INFO: Creating deployment "nginx-deployment" Jun 22 12:03:33.779: INFO: Waiting for observed generation 1 Jun 22 12:03:36.139: INFO: Waiting for all required pods to come up Jun 22 12:03:36.143: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 22 12:03:44.559: INFO: Waiting for deployment "nginx-deployment" to complete Jun 22 12:03:44.565: INFO: Updating deployment "nginx-deployment" with a non-existent image Jun 22 12:03:44.572: INFO: Updating deployment nginx-deployment Jun 22 12:03:44.572: INFO: Waiting for observed generation 2 Jun 22 12:03:46.631: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 22 12:03:46.675: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 22 12:03:46.678: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 22 12:03:46.686: 
Jun 22 12:03:46.686: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jun 22 12:03:46.686: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jun 22 12:03:46.688: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jun 22 12:03:46.692: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jun 22 12:03:46.692: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jun 22 12:03:46.697: INFO: Updating deployment nginx-deployment
Jun 22 12:03:46.697: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jun 22 12:03:46.953: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jun 22 12:03:47.098: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jun 22 12:03:47.434: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mrg6d/deployments/nginx-deployment,UID:6558db8a-b480-11ea-99e8-0242ac110002,ResourceVersion:17295232,Generation:3,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-06-22 12:03:44 +0000 UTC 2020-06-22 12:03:33 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-06-22 12:03:46 +0000 UTC 2020-06-22 12:03:46 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 22 12:03:47.500: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mrg6d/replicasets/nginx-deployment-5c98f8fb5,UID:6bd4e0a6-b480-11ea-99e8-0242ac110002,ResourceVersion:17295244,Generation:3,CreationTimestamp:2020-06-22 12:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6558db8a-b480-11ea-99e8-0242ac110002 0xc001367647 0xc001367648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 12:03:47.500: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 22 12:03:47.500: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mrg6d/replicasets/nginx-deployment-85ddf47c5d,UID:65609aea-b480-11ea-99e8-0242ac110002,ResourceVersion:17295243,Generation:3,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6558db8a-b480-11ea-99e8-0242ac110002 0xc001367d37 0xc001367d38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 22 12:03:47.620: INFO: Pod "nginx-deployment-5c98f8fb5-4w8j9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4w8j9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-4w8j9,UID:6d518e3b-b480-11ea-99e8-0242ac110002,ResourceVersion:17295239,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc0017a00c7 0xc0017a00c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a02a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a02c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.620: INFO: Pod "nginx-deployment-5c98f8fb5-8l5qj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8l5qj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-8l5qj,UID:6d3eb636-b480-11ea-99e8-0242ac110002,ResourceVersion:17295220,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc0017a0337 0xc0017a0338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a03c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a0490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.621: INFO: Pod "nginx-deployment-5c98f8fb5-8ltwr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8ltwr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-8ltwr,UID:6bf4afeb-b480-11ea-99e8-0242ac110002,ResourceVersion:17295179,Generation:0,CreationTimestamp:2020-06-22 12:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc0017a0507 0xc0017a0508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a05b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a05d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-22 12:03:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.621: INFO: Pod "nginx-deployment-5c98f8fb5-944kd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-944kd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-944kd,UID:6d3e9ddc-b480-11ea-99e8-0242ac110002,ResourceVersion:17295215,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc0017a0727 0xc0017a0728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a08a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a08c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.621: INFO: Pod "nginx-deployment-5c98f8fb5-9j6ql" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9j6ql,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-9j6ql,UID:6bd7f2fc-b480-11ea-99e8-0242ac110002,ResourceVersion:17295154,Generation:0,CreationTimestamp:2020-06-22 12:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc0017a1267 0xc0017a1268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a12f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a15d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-22 12:03:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.621: INFO: Pod "nginx-deployment-5c98f8fb5-dvdb5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dvdb5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-dvdb5,UID:6d519864-b480-11ea-99e8-0242ac110002,ResourceVersion:17295237,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc0017a16f7 0xc0017a16f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a1a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a1a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.621: INFO: Pod "nginx-deployment-5c98f8fb5-dwfzr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dwfzr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-dwfzr,UID:6d517338-b480-11ea-99e8-0242ac110002,ResourceVersion:17295236,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc0017a1af7 0xc0017a1af8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a1bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a1bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.622: INFO: Pod "nginx-deployment-5c98f8fb5-g5wjx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g5wjx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-g5wjx,UID:6d517978-b480-11ea-99e8-0242ac110002,ResourceVersion:17295234,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc0017a1c47 0xc0017a1c48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a1d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a1d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.622: INFO: Pod "nginx-deployment-5c98f8fb5-kd86t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kd86t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-kd86t,UID:6bdb8cc3-b480-11ea-99e8-0242ac110002,ResourceVersion:17295163,Generation:0,CreationTimestamp:2020-06-22 12:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc0017a1db7 0xc0017a1db8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a1e30} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc0017a1f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-22 12:03:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.622: INFO: Pod "nginx-deployment-5c98f8fb5-nssml" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nssml,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-nssml,UID:6d39f261-b480-11ea-99e8-0242ac110002,ResourceVersion:17295254,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc001edc107 0xc001edc108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001edc3c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001edc410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 
+0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-22 12:03:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.622: INFO: Pod "nginx-deployment-5c98f8fb5-tcmtj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tcmtj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-tcmtj,UID:6bdb9465-b480-11ea-99e8-0242ac110002,ResourceVersion:17295165,Generation:0,CreationTimestamp:2020-06-22 12:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc001edc577 0xc001edc578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001edc650} {node.kubernetes.io/unreachable Exists NoExecute 0xc001edc6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-22 12:03:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.622: INFO: Pod "nginx-deployment-5c98f8fb5-wz8sq" is not 
available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wz8sq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-wz8sq,UID:6bf2e9e8-b480-11ea-99e8-0242ac110002,ResourceVersion:17295183,Generation:0,CreationTimestamp:2020-06-22 12:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc001edc8b7 0xc001edc8b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001edca60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001edca80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-22 12:03:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.622: INFO: Pod "nginx-deployment-5c98f8fb5-z4vsw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z4vsw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-5c98f8fb5-z4vsw,UID:6d6269f0-b480-11ea-99e8-0242ac110002,ResourceVersion:17295246,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bd4e0a6-b480-11ea-99e8-0242ac110002 0xc001edd1b7 0xc001edd1b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001edd470} {node.kubernetes.io/unreachable Exists NoExecute 0xc001edd490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.622: INFO: Pod "nginx-deployment-85ddf47c5d-2468m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2468m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-2468m,UID:6d39f69d-b480-11ea-99e8-0242ac110002,ResourceVersion:17295258,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001edd577 0xc001edd578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001edd6f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001edd710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-22 12:03:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.623: INFO: Pod "nginx-deployment-85ddf47c5d-24qpw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-24qpw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-24qpw,UID:656bcdea-b480-11ea-99e8-0242ac110002,ResourceVersion:17295123,Generation:0,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001edd967 0xc001edd968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eddb00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eddb60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.150,StartTime:2020-06-22 12:03:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 12:03:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://409bf52d0a001985c878ddac274cce3b63f68243b5c84ee512b7dc9a47e32470}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.623: INFO: Pod "nginx-deployment-85ddf47c5d-2vplf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2vplf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-2vplf,UID:6d3a01ab-b480-11ea-99e8-0242ac110002,ResourceVersion:17295252,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001bca087 0xc001bca088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bca4b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bca780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-22 12:03:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.623: INFO: Pod "nginx-deployment-85ddf47c5d-5486c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5486c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-5486c,UID:6d3e74b4-b480-11ea-99e8-0242ac110002,ResourceVersion:17295213,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001bca9c7 0xc001bca9c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bcaaa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bcaac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.623: INFO: Pod "nginx-deployment-85ddf47c5d-b7mt8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b7mt8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-b7mt8,UID:656bcaaf-b480-11ea-99e8-0242ac110002,ResourceVersion:17295111,Generation:0,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001bcaf37 0xc001bcaf38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bcb320} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bcb380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:34 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.149,StartTime:2020-06-22 12:03:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 12:03:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://26cd8c54d3aa669a1314ef76f191b6df3d2af999e502950dfbd1e1dbcaf74173}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.623: INFO: Pod "nginx-deployment-85ddf47c5d-dzqsl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dzqsl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-dzqsl,UID:6d3e8cda-b480-11ea-99e8-0242ac110002,ResourceVersion:17295217,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001bcb887 0xc001bcb888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bcba00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bcbd80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.623: INFO: Pod "nginx-deployment-85ddf47c5d-fr2cv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fr2cv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-fr2cv,UID:6d3e7825-b480-11ea-99e8-0242ac110002,ResourceVersion:17295216,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001bcbdf7 0xc001bcbdf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bcbe90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bcbeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.623: INFO: Pod "nginx-deployment-85ddf47c5d-ggtwk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ggtwk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-ggtwk,UID:657419ff-b480-11ea-99e8-0242ac110002,ResourceVersion:17295117,Generation:0,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c54037 0xc001c54038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c540b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c540d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.121,StartTime:2020-06-22 12:03:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 12:03:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7af379719bf5811a009240002253bbefa0eb0bf3e21e6b3b5e232698d187fd1e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.624: INFO: Pod "nginx-deployment-85ddf47c5d-kpj85" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kpj85,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-kpj85,UID:65680c9c-b480-11ea-99e8-0242ac110002,ResourceVersion:17295077,Generation:0,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c54197 0xc001c54198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54210} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.147,StartTime:2020-06-22 12:03:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 12:03:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://702fb2602544609ecd1bcbf8f05bf55026087ccac062262f448a38f1ffd0bdd7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.624: INFO: Pod "nginx-deployment-85ddf47c5d-q2j52" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q2j52,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-q2j52,UID:6d3249a3-b480-11ea-99e8-0242ac110002,ResourceVersion:17295247,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c542f7 0xc001c542f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54370} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-22 12:03:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.624: INFO: Pod "nginx-deployment-85ddf47c5d-qfp4m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qfp4m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-qfp4m,UID:6568b5a1-b480-11ea-99e8-0242ac110002,ResourceVersion:17295103,Generation:0,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c54457 0xc001c54458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c544e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.117,StartTime:2020-06-22 12:03:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 12:03:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0404ea198210943669689374ffa29b8af1c29d133b9be693c7df3872664d8e33}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.624: INFO: Pod "nginx-deployment-85ddf47c5d-qq8p7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qq8p7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-qq8p7,UID:656bcfe2-b480-11ea-99e8-0242ac110002,ResourceVersion:17295110,Generation:0,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c545f7 0xc001c545f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54670} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.119,StartTime:2020-06-22 12:03:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 12:03:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4ae8093489e886c4a4f8b13d7258fca846d4a5222cbefbc458b304efb556533c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.624: INFO: Pod "nginx-deployment-85ddf47c5d-r8jv4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r8jv4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-r8jv4,UID:6d514a9e-b480-11ea-99e8-0242ac110002,ResourceVersion:17295231,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c54797 0xc001c54798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54810} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.624: INFO: Pod "nginx-deployment-85ddf47c5d-rf8xs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rf8xs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-rf8xs,UID:6d5168a8-b480-11ea-99e8-0242ac110002,ResourceVersion:17295238,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c548a7 0xc001c548a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54920} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.624: INFO: Pod "nginx-deployment-85ddf47c5d-snf2r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-snf2r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-snf2r,UID:656bd605-b480-11ea-99e8-0242ac110002,ResourceVersion:17295102,Generation:0,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c549b7 0xc001c549b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.148,StartTime:2020-06-22 12:03:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 12:03:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9e0950eaf71053c5538fe636845056f6d3bfc97ce5bed77aa11b1ab0e412d5a0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.625: INFO: Pod "nginx-deployment-85ddf47c5d-swqvw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-swqvw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-swqvw,UID:6d518c93-b480-11ea-99e8-0242ac110002,ResourceVersion:17295241,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c54b17 0xc001c54b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.625: INFO: Pod "nginx-deployment-85ddf47c5d-tv92c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tv92c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-tv92c,UID:6d518b1e-b480-11ea-99e8-0242ac110002,ResourceVersion:17295242,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c54cc7 0xc001c54cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.625: INFO: Pod "nginx-deployment-85ddf47c5d-vp4gb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vp4gb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-vp4gb,UID:6d3e8ae5-b480-11ea-99e8-0242ac110002,ResourceVersion:17295214,Generation:0,CreationTimestamp:2020-06-22 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c54de7 0xc001c54de8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c54eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.625: INFO: Pod "nginx-deployment-85ddf47c5d-vxvf8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vxvf8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-vxvf8,UID:6568a5c5-b480-11ea-99e8-0242ac110002,ResourceVersion:17295097,Generation:0,CreationTimestamp:2020-06-22 12:03:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c54f27 0xc001c54f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c550c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c550e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.118,StartTime:2020-06-22 12:03:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 12:03:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://964d0ab7e29b5513f9cb09a651f017866f926c9c42c2c4a69d95d458b6f9db84}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 12:03:47.625: INFO: Pod "nginx-deployment-85ddf47c5d-zg7qr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zg7qr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mrg6d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mrg6d/pods/nginx-deployment-85ddf47c5d-zg7qr,UID:6d5186db-b480-11ea-99e8-0242ac110002,ResourceVersion:17295240,Generation:0,CreationTimestamp:2020-06-22 12:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65609aea-b480-11ea-99e8-0242ac110002 0xc001c551d7 0xc001c551d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bmt9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmt9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmt9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c55250} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c55280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:03:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:03:47.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-deployment-mrg6d" for this suite. Jun 22 12:04:14.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:04:14.074: INFO: namespace: e2e-tests-deployment-mrg6d, resource: bindings, ignored listing per whitelist Jun 22 12:04:14.133: INFO: namespace e2e-tests-deployment-mrg6d deletion completed in 26.455395071s • [SLOW TEST:40.465 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:04:14.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 22 12:04:14.246: INFO: Waiting up to 5m0s for pod "downward-api-7d7ad301-b480-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-9d6fc" to be "success or failure" Jun 22 12:04:14.367: INFO: Pod "downward-api-7d7ad301-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 120.905666ms Jun 22 12:04:16.371: INFO: Pod "downward-api-7d7ad301-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125219187s Jun 22 12:04:18.376: INFO: Pod "downward-api-7d7ad301-b480-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129675592s STEP: Saw pod success Jun 22 12:04:18.376: INFO: Pod "downward-api-7d7ad301-b480-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:04:18.379: INFO: Trying to get logs from node hunter-worker pod downward-api-7d7ad301-b480-11ea-8cd8-0242ac11001b container dapi-container: STEP: delete the pod Jun 22 12:04:18.422: INFO: Waiting for pod downward-api-7d7ad301-b480-11ea-8cd8-0242ac11001b to disappear Jun 22 12:04:18.433: INFO: Pod downward-api-7d7ad301-b480-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:04:18.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9d6fc" for this suite. 
Jun 22 12:04:24.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:04:24.494: INFO: namespace: e2e-tests-downward-api-9d6fc, resource: bindings, ignored listing per whitelist Jun 22 12:04:24.523: INFO: namespace e2e-tests-downward-api-9d6fc deletion completed in 6.087140347s • [SLOW TEST:10.390 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:04:24.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 12:04:24.616: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 22 12:04:24.663: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 22 12:04:29.668: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 22 12:04:29.668: INFO: Creating deployment "test-rolling-update-deployment" Jun 22 12:04:29.672: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 22 12:04:29.690: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 22 12:04:31.940: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 22 12:04:31.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728424269, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728424269, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728424269, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728424269, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 12:04:33.950: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 22 12:04:33.959: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-sw49n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sw49n/deployments/test-rolling-update-deployment,UID:86afc140-b480-11ea-99e8-0242ac110002,ResourceVersion:17295616,Generation:1,CreationTimestamp:2020-06-22 12:04:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-22 12:04:29 +0000 UTC 2020-06-22 12:04:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-22 12:04:33 +0000 UTC 2020-06-22 12:04:29 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 22 12:04:33.962: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-sw49n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sw49n/replicasets/test-rolling-update-deployment-75db98fb4c,UID:86b3c40a-b480-11ea-99e8-0242ac110002,ResourceVersion:17295606,Generation:1,CreationTimestamp:2020-06-22 12:04:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 86afc140-b480-11ea-99e8-0242ac110002 0xc002a80b37 0xc002a80b38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 22 12:04:33.962: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 22 12:04:33.963: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-sw49n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sw49n/replicasets/test-rolling-update-controller,UID:83acfb89-b480-11ea-99e8-0242ac110002,ResourceVersion:17295615,Generation:2,CreationTimestamp:2020-06-22 12:04:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 86afc140-b480-11ea-99e8-0242ac110002 0xc002a80a67 0xc002a80a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 12:04:33.966: INFO: Pod "test-rolling-update-deployment-75db98fb4c-bdjbs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-bdjbs,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-sw49n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sw49n/pods/test-rolling-update-deployment-75db98fb4c-bdjbs,UID:86b896e1-b480-11ea-99e8-0242ac110002,ResourceVersion:17295605,Generation:0,CreationTimestamp:2020-06-22 12:04:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 86b3c40a-b480-11ea-99e8-0242ac110002 0xc002a81887 0xc002a81888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-47h9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47h9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-47h9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a81900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a81920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:04:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:04:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:04:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.135,StartTime:2020-06-22 12:04:29 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-22 12:04:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f08c721415f695c75adef8f38d4499f0eb9fe99d63726b685f84b1a941cfd6ee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:04:33.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-sw49n" for this suite. 
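For reference, the rolling update exercised above replaces an nginx-backed controller with a redis-backed Deployment that uses the default RollingUpdate strategy; the "25%!,(MISSING)" fragments in the dump are only formatter escaping artifacts, the strategy is simply maxSurge 25% and maxUnavailable 25%. A minimal sketch of that Deployment built with the k8s.io/api types (this is illustrative, not the e2e framework's own code; the name, labels, and image are copied from the log):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	maxSurge := intstr.FromString("25%")
	maxUnavailable := intstr.FromString("25%")

	d := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					// Roll out by surging up to 25% extra pods while keeping
					// unavailability within 25% of the desired count.
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}

Once the new ReplicaSet (test-rolling-update-deployment-75db98fb4c) reports its replica available, the old test-rolling-update-controller is scaled to zero, which is exactly what the two ReplicaSet dumps above show.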
Jun 22 12:04:40.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:04:40.182: INFO: namespace: e2e-tests-deployment-sw49n, resource: bindings, ignored listing per whitelist Jun 22 12:04:40.201: INFO: namespace e2e-tests-deployment-sw49n deletion completed in 6.232275896s • [SLOW TEST:15.677 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:04:40.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0622 12:04:50.369965 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 22 12:04:50.370: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:04:50.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-bx6xj" for this suite. 
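The garbage-collector case above deletes a ReplicationController "when not orphaning", i.e. with a non-Orphan propagation policy, so the GC removes the RC's pods as well. A hedged client-go sketch of that delete call, assuming the pre-1.18 client-go method signatures used by this release; the namespace, RC name, and Background policy here are illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Anything other than Orphan lets the garbage collector delete the dependents.
	policy := metav1.DeletePropagationBackground
	err = cs.CoreV1().ReplicationControllers("default").Delete("example-rc",
		&metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("RC deleted; its pods will be garbage collected in the background")
}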
Jun 22 12:04:56.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:04:56.414: INFO: namespace: e2e-tests-gc-bx6xj, resource: bindings, ignored listing per whitelist Jun 22 12:04:56.474: INFO: namespace e2e-tests-gc-bx6xj deletion completed in 6.100271174s • [SLOW TEST:16.273 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:04:56.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-96b86bb6-b480-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 12:04:56.583: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-96b91f3e-b480-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-fxtjm" to be "success or failure" Jun 22 12:04:56.587: INFO: Pod "pod-projected-configmaps-96b91f3e-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339785ms Jun 22 12:04:58.758: INFO: Pod "pod-projected-configmaps-96b91f3e-b480-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17532156s Jun 22 12:05:00.762: INFO: Pod "pod-projected-configmaps-96b91f3e-b480-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179005545s STEP: Saw pod success Jun 22 12:05:00.762: INFO: Pod "pod-projected-configmaps-96b91f3e-b480-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:05:00.764: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-96b91f3e-b480-11ea-8cd8-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 22 12:05:00.786: INFO: Waiting for pod pod-projected-configmaps-96b91f3e-b480-11ea-8cd8-0242ac11001b to disappear Jun 22 12:05:00.790: INFO: Pod pod-projected-configmaps-96b91f3e-b480-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:05:00.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fxtjm" for this suite. 
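The projected-ConfigMap test above mounts a ConfigMap through a projected volume and has the container print a key back, which is why the framework waits for the pod to reach "success or failure" and then reads its logs. A rough sketch of the pod spec being exercised, using the k8s.io/api types (the ConfigMap name, key, mount path, and image are illustrative placeholders, not taken from the framework source):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}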
Jun 22 12:05:06.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:05:06.912: INFO: namespace: e2e-tests-projected-fxtjm, resource: bindings, ignored listing per whitelist Jun 22 12:05:06.912: INFO: namespace e2e-tests-projected-fxtjm deletion completed in 6.117745608s • [SLOW TEST:10.438 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:05:06.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 22 12:05:07.090: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:05:16.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-rrrmx" for this suite. 
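The init-container case above only logs "PodSpec: initContainers in spec.initContainers", but what it verifies is that on a RestartAlways pod every init container runs to completion, in order, before the regular container is started. A minimal sketch of such a spec (images and commands are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			// RestartAlways is the default; both init containers must exit 0,
			// one after the other, before "run1" is created.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sh", "-c", "sleep 3600"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}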
Jun 22 12:05:38.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:05:38.147: INFO: namespace: e2e-tests-init-container-rrrmx, resource: bindings, ignored listing per whitelist Jun 22 12:05:38.171: INFO: namespace e2e-tests-init-container-rrrmx deletion completed in 22.083228436s • [SLOW TEST:31.259 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:05:38.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 12:05:56.295: INFO: Container started at 2020-06-22 12:05:40 +0000 UTC, pod became ready at 2020-06-22 12:05:55 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:05:56.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-mw4xx" for this suite. 
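The probe test above checks two things: the pod must not report Ready before the readiness probe's initial delay (the log shows a roughly 15 second gap between container start and readiness), and the container must never be restarted. A sketch of the relevant readiness-probe wiring, assuming the pre-1.18 k8s.io/api types where the handler field is still named Handler (the image, command, and timing values are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-probe-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test-webserver",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /tmp/ready && sleep 600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
					},
					// The pod must not become Ready before this delay elapses.
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}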
Jun 22 12:06:18.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:06:18.402: INFO: namespace: e2e-tests-container-probe-mw4xx, resource: bindings, ignored listing per whitelist Jun 22 12:06:18.411: INFO: namespace e2e-tests-container-probe-mw4xx deletion completed in 22.111711973s • [SLOW TEST:40.239 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:06:18.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 22 12:06:25.588: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:06:26.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-95t55" for this suite. 
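Adoption and release above are driven entirely by labels: a bare pod whose labels match the ReplicaSet selector gets adopted (it gains an ownerReference to the ReplicaSet), and changing the label so it no longer matches releases it again. A hedged client-go sketch of the label flip, assuming pre-1.18 method signatures; the namespace and the replacement label value are illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // the e2e run uses a generated e2e-tests-replicaset-* namespace

	pod, err := cs.CoreV1().Pods(ns).Get("pod-adoption-release", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// While pod.Labels matches the ReplicaSet selector the controller adopts the pod;
	// once the label stops matching, the controller releases it and creates a replacement.
	pod.Labels["name"] = "not-pod-adoption-release"
	if _, err := cs.CoreV1().Pods(ns).Update(pod); err != nil {
		panic(err)
	}
	fmt.Println("pod relabelled; the ReplicaSet releases it")
}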
Jun 22 12:09:36.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:09:36.738: INFO: namespace: e2e-tests-replicaset-95t55, resource: bindings, ignored listing per whitelist Jun 22 12:09:36.760: INFO: namespace e2e-tests-replicaset-95t55 deletion completed in 3m10.128808719s • [SLOW TEST:198.349 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:09:36.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-3dc50546-b481-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 12:09:36.864: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3dc75f46-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-hzg6w" to be "success or failure" Jun 22 12:09:36.867: INFO: Pod "pod-projected-configmaps-3dc75f46-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.327061ms Jun 22 12:09:38.872: INFO: Pod "pod-projected-configmaps-3dc75f46-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007748736s Jun 22 12:09:40.876: INFO: Pod "pod-projected-configmaps-3dc75f46-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012398716s STEP: Saw pod success Jun 22 12:09:40.876: INFO: Pod "pod-projected-configmaps-3dc75f46-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:09:40.879: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-3dc75f46-b481-11ea-8cd8-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 22 12:09:40.996: INFO: Waiting for pod pod-projected-configmaps-3dc75f46-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:09:41.005: INFO: Pod pod-projected-configmaps-3dc75f46-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:09:41.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hzg6w" for this suite. 
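This case is the same projected-ConfigMap mount as before, but with an explicit DefaultMode, so the projected files land with specific permissions and the container reports the mode it actually sees. Only the volume definition changes; a sketch of that piece (the 0400 value and ConfigMap name are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// Applied to every projected file unless an item overrides it.
				DefaultMode: int32Ptr(0400),
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}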
Jun 22 12:09:47.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:09:47.069: INFO: namespace: e2e-tests-projected-hzg6w, resource: bindings, ignored listing per whitelist Jun 22 12:09:47.109: INFO: namespace e2e-tests-projected-hzg6w deletion completed in 6.099979221s • [SLOW TEST:10.349 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:09:47.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:09:47.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43f5aed1-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-bxs8h" to be "success or failure" Jun 22 12:09:47.360: INFO: Pod "downwardapi-volume-43f5aed1-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 137.529695ms Jun 22 12:09:49.365: INFO: Pod "downwardapi-volume-43f5aed1-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142418661s Jun 22 12:09:51.369: INFO: Pod "downwardapi-volume-43f5aed1-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146617693s STEP: Saw pod success Jun 22 12:09:51.369: INFO: Pod "downwardapi-volume-43f5aed1-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:09:51.372: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-43f5aed1-b481-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:09:51.485: INFO: Waiting for pod downwardapi-volume-43f5aed1-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:09:51.690: INFO: Pod downwardapi-volume-43f5aed1-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:09:51.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bxs8h" for this suite. 
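Here the downward-API volume sets a per-item Mode, so a single projected file (for example the pod name taken from metadata.name) is created with a specific permission bit pattern that the container then reads back. A sketch of that item definition (the path and mode value are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "name_file_mode",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					// Per-item mode overrides the volume's default mode.
					Mode: int32Ptr(0400),
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}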
Jun 22 12:09:57.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:09:57.772: INFO: namespace: e2e-tests-downward-api-bxs8h, resource: bindings, ignored listing per whitelist Jun 22 12:09:57.927: INFO: namespace e2e-tests-downward-api-bxs8h deletion completed in 6.233680619s • [SLOW TEST:10.818 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:09:57.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-4a6c1c10-b481-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 12:09:58.090: INFO: Waiting up to 5m0s for pod "pod-secrets-4a6fac3a-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-secrets-m4hzn" to be "success or failure" Jun 22 12:09:58.120: INFO: Pod "pod-secrets-4a6fac3a-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.977711ms Jun 22 12:10:00.774: INFO: Pod "pod-secrets-4a6fac3a-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.683691869s Jun 22 12:10:02.778: INFO: Pod "pod-secrets-4a6fac3a-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.68782585s STEP: Saw pod success Jun 22 12:10:02.778: INFO: Pod "pod-secrets-4a6fac3a-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:10:02.781: INFO: Trying to get logs from node hunter-worker pod pod-secrets-4a6fac3a-b481-11ea-8cd8-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 22 12:10:03.063: INFO: Waiting for pod pod-secrets-4a6fac3a-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:10:03.180: INFO: Pod pod-secrets-4a6fac3a-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:10:03.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-m4hzn" for this suite. 
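The secret test mounts a Secret "with mappings", i.e. with explicit KeyToPath items, so a data key appears under a remapped file name inside the volume rather than under the key name itself. A sketch of that volume definition (the secret name, key, and target path are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map",
				// The "mappings": data key -> file path inside the mount.
				Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}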
Jun 22 12:10:09.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:10:09.228: INFO: namespace: e2e-tests-secrets-m4hzn, resource: bindings, ignored listing per whitelist Jun 22 12:10:09.264: INFO: namespace e2e-tests-secrets-m4hzn deletion completed in 6.079792395s • [SLOW TEST:11.337 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:10:09.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-512b3d26-b481-11ea-8cd8-0242ac11001b STEP: Creating the pod STEP: Updating configmap configmap-test-upd-512b3d26-b481-11ea-8cd8-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:10:15.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-l74ql" for this suite. 
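The update test above creates a pod that mounts a ConfigMap as a volume, then modifies the ConfigMap and polls the mounted file until the kubelet's periodic sync writes the new value into the running container. The update half of that, as a hedged client-go sketch with pre-1.18 method signatures (namespace, names, and values are illustrative):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cm, err := cs.CoreV1().ConfigMaps("default").Get("configmap-test-upd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // flip a value, then watch the mounted file change
	if _, err := cs.CoreV1().ConfigMaps("default").Update(cm); err != nil {
		panic(err)
	}
	fmt.Println("ConfigMap updated; the kubelet syncs the volume contents on its next sync")
}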
Jun 22 12:10:37.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:10:37.541: INFO: namespace: e2e-tests-configmap-l74ql, resource: bindings, ignored listing per whitelist Jun 22 12:10:37.554: INFO: namespace e2e-tests-configmap-l74ql deletion completed in 22.088864273s • [SLOW TEST:28.289 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:10:37.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wn77b A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wn77b;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wn77b A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wn77b;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wn77b.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wn77b.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wn77b.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wn77b.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wn77b.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wn77b.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wn77b.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wn77b.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wn77b.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 179.191.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.191.179_udp@PTR;check="$$(dig +tcp +noall +answer +search 179.191.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.191.179_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wn77b A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wn77b;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wn77b A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wn77b;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wn77b.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wn77b.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wn77b.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wn77b.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wn77b.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wn77b.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wn77b.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 179.191.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.191.179_udp@PTR;check="$$(dig +tcp +noall +answer +search 179.191.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.191.179_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 12:10:49.887: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.890: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.907: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.910: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.913: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.916: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.919: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.921: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.924: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.927: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:49.953: INFO: Lookups using e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wn77b jessie_tcp@dns-test-service.e2e-tests-dns-wn77b jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc 
jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc] Jun 22 12:10:54.970: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:54.972: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:54.987: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:54.989: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:54.991: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:54.994: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:54.996: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:54.999: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:55.001: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:55.003: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:55.018: INFO: Lookups using e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wn77b jessie_tcp@dns-test-service.e2e-tests-dns-wn77b jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc] Jun 22 12:10:59.978: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:10:59.981: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:00.003: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:00.007: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:00.010: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:00.013: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:00.016: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:00.020: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:00.023: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:00.026: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:00.046: INFO: Lookups using e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wn77b jessie_tcp@dns-test-service.e2e-tests-dns-wn77b jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc] Jun 22 12:11:04.979: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the 
requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:04.982: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:05.005: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:05.008: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:05.010: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:05.014: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:05.016: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:05.018: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:05.021: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:05.024: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:05.042: INFO: Lookups using e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wn77b jessie_tcp@dns-test-service.e2e-tests-dns-wn77b jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc] Jun 22 12:11:09.974: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:09.976: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:09.999: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:10.004: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:10.007: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:10.009: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:10.012: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:10.015: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:10.019: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:10.022: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc from pod e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b: the server could not find the requested resource (get pods dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b) Jun 22 12:11:10.035: INFO: Lookups using e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wn77b jessie_tcp@dns-test-service.e2e-tests-dns-wn77b jessie_udp@dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@dns-test-service.e2e-tests-dns-wn77b.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wn77b.svc] Jun 22 12:11:15.073: INFO: DNS probes using e2e-tests-dns-wn77b/dns-test-6214c6c4-b481-11ea-8cd8-0242ac11001b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:11:16.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-wn77b" for this suite. 
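The probe pods above run dig in a loop against the service's A and SRV records until every name resolves; the early "Unable to read ..." lines are just the poll retrying before kube-dns has published all the records. From inside any pod on this cluster the equivalent lookups can be done with the Go standard library, assuming the default cluster.local domain (the service and namespace names are taken from the log):

package main

import (
	"fmt"
	"net"
)

func main() {
	// A record for the test service.
	addrs, err := net.LookupHost("dns-test-service.e2e-tests-dns-wn77b.svc.cluster.local")
	if err != nil {
		fmt.Println("A lookup failed:", err)
	} else {
		fmt.Println("A records:", addrs)
	}

	// SRV record published for the named port "http" on the same service.
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.e2e-tests-dns-wn77b.svc.cluster.local")
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
		return
	}
	for _, s := range srvs {
		fmt.Printf("SRV target: %s port: %d\n", s.Target, s.Port)
	}
}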
Jun 22 12:11:23.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:11:23.662: INFO: namespace: e2e-tests-dns-wn77b, resource: bindings, ignored listing per whitelist Jun 22 12:11:23.712: INFO: namespace e2e-tests-dns-wn77b deletion completed in 7.051005629s • [SLOW TEST:46.158 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:11:23.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:11:24.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7da5adc8-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-7cprt" to be "success or failure" Jun 22 12:11:24.139: INFO: Pod "downwardapi-volume-7da5adc8-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.636556ms Jun 22 12:11:26.143: INFO: Pod "downwardapi-volume-7da5adc8-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012008633s Jun 22 12:11:28.229: INFO: Pod "downwardapi-volume-7da5adc8-b481-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.098179236s Jun 22 12:11:30.235: INFO: Pod "downwardapi-volume-7da5adc8-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103703966s STEP: Saw pod success Jun 22 12:11:30.235: INFO: Pod "downwardapi-volume-7da5adc8-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:11:30.247: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7da5adc8-b481-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:11:30.275: INFO: Waiting for pod downwardapi-volume-7da5adc8-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:11:30.452: INFO: Pod downwardapi-volume-7da5adc8-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:11:30.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7cprt" for this suite. 
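The memory-limit variant projects the container's own limits.memory through a downward-API volume, which requires the container to actually declare a memory limit. A sketch of the pod being exercised (image, command, paths, and the 64Mi limit are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					// The downward API can only project limits that are actually set.
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}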
Jun 22 12:11:36.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:11:36.716: INFO: namespace: e2e-tests-downward-api-7cprt, resource: bindings, ignored listing per whitelist Jun 22 12:11:36.758: INFO: namespace e2e-tests-downward-api-7cprt deletion completed in 6.301823537s • [SLOW TEST:13.045 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:11:36.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jun 22 12:11:37.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 22 12:11:39.739: INFO: stderr: "" Jun 22 12:11:39.740: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:11:39.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zsntr" for this suite. 
Jun 22 12:11:45.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:11:45.779: INFO: namespace: e2e-tests-kubectl-zsntr, resource: bindings, ignored listing per whitelist Jun 22 12:11:45.838: INFO: namespace e2e-tests-kubectl-zsntr deletion completed in 6.094535719s • [SLOW TEST:9.080 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:11:45.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 22 12:11:46.204: INFO: Waiting up to 5m0s for pod "downward-api-8ac700ba-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-njkb8" to be "success or failure" Jun 22 12:11:46.254: INFO: Pod "downward-api-8ac700ba-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 50.038951ms Jun 22 12:11:48.259: INFO: Pod "downward-api-8ac700ba-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054638728s Jun 22 12:11:50.476: INFO: Pod "downward-api-8ac700ba-b481-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.272146954s Jun 22 12:11:52.480: INFO: Pod "downward-api-8ac700ba-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.276155172s STEP: Saw pod success Jun 22 12:11:52.480: INFO: Pod "downward-api-8ac700ba-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:11:52.482: INFO: Trying to get logs from node hunter-worker pod downward-api-8ac700ba-b481-11ea-8cd8-0242ac11001b container dapi-container: STEP: delete the pod Jun 22 12:11:52.526: INFO: Waiting for pod downward-api-8ac700ba-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:11:52.541: INFO: Pod downward-api-8ac700ba-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:11:52.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-njkb8" for this suite. 
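Note: the [sig-node] Downward API test above injects the node's address into the dapi-container through an environment variable backed by the pod's status. A minimal sketch of that wiring (pod name, image and the variable name are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "env | grep HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # resolved to the node's IP when the pod starts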
Jun 22 12:11:58.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:11:58.693: INFO: namespace: e2e-tests-downward-api-njkb8, resource: bindings, ignored listing per whitelist Jun 22 12:11:58.720: INFO: namespace e2e-tests-downward-api-njkb8 deletion completed in 6.174848373s • [SLOW TEST:12.882 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:11:58.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:11:58.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-927aa22c-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-24njz" to be "success or failure" Jun 22 12:11:58.961: INFO: Pod "downwardapi-volume-927aa22c-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.452553ms Jun 22 12:12:00.966: INFO: Pod "downwardapi-volume-927aa22c-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007815825s Jun 22 12:12:02.970: INFO: Pod "downwardapi-volume-927aa22c-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012636059s Jun 22 12:12:04.974: INFO: Pod "downwardapi-volume-927aa22c-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015800083s STEP: Saw pod success Jun 22 12:12:04.974: INFO: Pod "downwardapi-volume-927aa22c-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:12:04.976: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-927aa22c-b481-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:12:05.026: INFO: Waiting for pod downwardapi-volume-927aa22c-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:12:05.105: INFO: Pod downwardapi-volume-927aa22c-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:12:05.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-24njz" for this suite. 
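Note: the [sig-storage] Projected downwardAPI test above checks that files created from a projected volume carry the volume's defaultMode. A sketch of a pod that sets a restrictive mode on a projected downward-API file (names and the 0400 value are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400               # applied to every projected file unless an item overrides it
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name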
Jun 22 12:12:11.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:12:11.264: INFO: namespace: e2e-tests-projected-24njz, resource: bindings, ignored listing per whitelist Jun 22 12:12:11.280: INFO: namespace e2e-tests-projected-24njz deletion completed in 6.170452988s • [SLOW TEST:12.560 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:12:11.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Jun 22 12:12:11.424: INFO: Waiting up to 5m0s for pod "client-containers-99e788fb-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-containers-k4jwh" to be "success or failure" Jun 22 12:12:11.427: INFO: Pod "client-containers-99e788fb-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.271483ms Jun 22 12:12:13.431: INFO: Pod "client-containers-99e788fb-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007204926s Jun 22 12:12:15.435: INFO: Pod "client-containers-99e788fb-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010508651s STEP: Saw pod success Jun 22 12:12:15.435: INFO: Pod "client-containers-99e788fb-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:12:15.437: INFO: Trying to get logs from node hunter-worker pod client-containers-99e788fb-b481-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 12:12:15.560: INFO: Waiting for pod client-containers-99e788fb-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:12:15.650: INFO: Pod client-containers-99e788fb-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:12:15.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-k4jwh" for this suite. 
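Note: the [k8s.io] Docker Containers test above ("override the image's default command and arguments") relies on the fact that a container's command replaces the image ENTRYPOINT and its args replaces the image CMD. A minimal sketch (pod name, image and the echoed strings are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29               # assumed image
    command: ["/bin/echo"]            # overrides the image ENTRYPOINT
    args: ["override", "arguments"]   # overrides the image CMD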
Jun 22 12:12:21.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:12:21.898: INFO: namespace: e2e-tests-containers-k4jwh, resource: bindings, ignored listing per whitelist Jun 22 12:12:21.947: INFO: namespace e2e-tests-containers-k4jwh deletion completed in 6.292586114s • [SLOW TEST:10.667 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:12:21.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:12:22.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0403f58-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-kpvgv" to be "success or failure" Jun 22 12:12:22.075: INFO: Pod "downwardapi-volume-a0403f58-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.893062ms Jun 22 12:12:24.104: INFO: Pod "downwardapi-volume-a0403f58-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033523485s Jun 22 12:12:26.108: INFO: Pod "downwardapi-volume-a0403f58-b481-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.037253082s Jun 22 12:12:28.112: INFO: Pod "downwardapi-volume-a0403f58-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041084889s STEP: Saw pod success Jun 22 12:12:28.112: INFO: Pod "downwardapi-volume-a0403f58-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:12:28.114: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a0403f58-b481-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:12:28.298: INFO: Waiting for pod downwardapi-volume-a0403f58-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:12:28.422: INFO: Pod downwardapi-volume-a0403f58-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:12:28.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kpvgv" for this suite. 
Jun 22 12:12:34.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:12:34.496: INFO: namespace: e2e-tests-downward-api-kpvgv, resource: bindings, ignored listing per whitelist Jun 22 12:12:34.532: INFO: namespace e2e-tests-downward-api-kpvgv deletion completed in 6.106409765s • [SLOW TEST:12.586 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:12:34.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-a7c131b9-b481-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 12:12:34.680: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a7c34ea0-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-d6t8v" to be "success or failure" Jun 22 12:12:34.686: INFO: Pod "pod-projected-secrets-a7c34ea0-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.829821ms Jun 22 12:12:36.794: INFO: Pod "pod-projected-secrets-a7c34ea0-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114298402s Jun 22 12:12:38.974: INFO: Pod "pod-projected-secrets-a7c34ea0-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293459163s STEP: Saw pod success Jun 22 12:12:38.974: INFO: Pod "pod-projected-secrets-a7c34ea0-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:12:38.977: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-a7c34ea0-b481-11ea-8cd8-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 22 12:12:39.053: INFO: Waiting for pod pod-projected-secrets-a7c34ea0-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:12:39.141: INFO: Pod pod-projected-secrets-a7c34ea0-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:12:39.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d6t8v" for this suite. 
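Note: the [sig-storage] Projected secret test above creates a secret and consumes it through a projected volume, then verifies the mode of the resulting files. A sketch of the same shape (secret name, key/value and the mode are illustrative assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo        # illustrative name
stringData:
  data-1: value-1                     # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "ls -l /etc/projected-secret && cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400               # assumed mode
      sources:
      - secret:
          name: projected-secret-demo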
Jun 22 12:12:45.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:12:45.219: INFO: namespace: e2e-tests-projected-d6t8v, resource: bindings, ignored listing per whitelist Jun 22 12:12:45.276: INFO: namespace e2e-tests-projected-d6t8v deletion completed in 6.132288062s • [SLOW TEST:10.743 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:12:45.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:12:45.401: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae27dc89-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-nndkg" to be "success or failure" Jun 22 12:12:45.405: INFO: Pod "downwardapi-volume-ae27dc89-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.876963ms Jun 22 12:12:47.476: INFO: Pod "downwardapi-volume-ae27dc89-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075078642s Jun 22 12:12:49.480: INFO: Pod "downwardapi-volume-ae27dc89-b481-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.078671735s Jun 22 12:12:51.483: INFO: Pod "downwardapi-volume-ae27dc89-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081952631s STEP: Saw pod success Jun 22 12:12:51.483: INFO: Pod "downwardapi-volume-ae27dc89-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:12:51.486: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ae27dc89-b481-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:12:51.520: INFO: Waiting for pod downwardapi-volume-ae27dc89-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:12:51.548: INFO: Pod downwardapi-volume-ae27dc89-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:12:51.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nndkg" for this suite. 
Jun 22 12:12:57.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:12:57.663: INFO: namespace: e2e-tests-projected-nndkg, resource: bindings, ignored listing per whitelist Jun 22 12:12:57.674: INFO: namespace e2e-tests-projected-nndkg deletion completed in 6.121964973s • [SLOW TEST:12.398 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:12:57.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 22 12:12:57.852: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:12:57.854: INFO: Number of nodes with available pods: 0 Jun 22 12:12:57.854: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:12:58.858: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:12:58.860: INFO: Number of nodes with available pods: 0 Jun 22 12:12:58.860: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:13:00.212: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:00.346: INFO: Number of nodes with available pods: 0 Jun 22 12:13:00.346: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:13:00.886: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:00.889: INFO: Number of nodes with available pods: 0 Jun 22 12:13:00.889: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:13:01.866: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:01.869: INFO: Number of nodes with available pods: 0 Jun 22 12:13:01.870: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:13:02.860: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:02.863: INFO: Number of nodes with available pods: 1 Jun 22 12:13:02.863: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:13:03.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:03.871: INFO: Number of nodes with available pods: 2 Jun 22 12:13:03.871: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jun 22 12:13:03.913: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:03.916: INFO: Number of nodes with available pods: 1 Jun 22 12:13:03.916: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:04.920: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:04.923: INFO: Number of nodes with available pods: 1 Jun 22 12:13:04.923: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:05.921: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:05.925: INFO: Number of nodes with available pods: 1 Jun 22 12:13:05.925: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:06.930: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:06.936: INFO: Number of nodes with available pods: 1 Jun 22 12:13:06.936: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:07.933: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:07.936: INFO: Number of nodes with available pods: 1 Jun 22 12:13:07.936: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:08.921: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:08.924: INFO: Number of nodes with available pods: 1 Jun 22 12:13:08.924: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:09.969: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:09.972: INFO: Number of nodes with available pods: 1 Jun 22 12:13:09.972: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:10.921: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:10.925: INFO: Number of nodes with available pods: 1 Jun 22 12:13:10.925: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:11.921: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:11.924: INFO: Number of nodes with available pods: 1 Jun 22 12:13:11.924: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:12.920: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:12.923: INFO: Number of nodes with available pods: 1 Jun 22 12:13:12.923: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:13.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:14.003: INFO: Number of nodes with available pods: 1 Jun 22 12:13:14.003: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:14.920: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:14.922: INFO: Number of nodes with available pods: 1 Jun 22 12:13:14.922: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:13:15.939: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:13:15.943: INFO: Number of nodes with available pods: 2 Jun 22 12:13:15.943: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kfs5g, will wait for the garbage collector to delete the pods Jun 22 12:13:16.047: INFO: Deleting DaemonSet.extensions daemon-set took: 25.329584ms Jun 22 12:13:16.148: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.302213ms Jun 22 12:13:31.765: INFO: Number of nodes with available pods: 0 Jun 22 12:13:31.765: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 12:13:31.767: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kfs5g/daemonsets","resourceVersion":"17297250"},"items":null} Jun 22 12:13:31.770: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kfs5g/pods","resourceVersion":"17297250"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:13:31.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-kfs5g" for this suite. 
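Note: in the [sig-apps] Daemon set test above, the repeated "DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master ...}]" lines simply mean the control-plane node is excluded because the DaemonSet declares no toleration for the master taint, so only the two worker nodes are counted toward "Number of running nodes". A minimal DaemonSet sketch of that kind (name, labels and image are illustrative assumptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                 # assumed label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed image
      # no toleration for node-role.kubernetes.io/master is declared, so control-plane
      # nodes are skipped, matching the "can't tolerate node hunter-control-plane" messages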
Jun 22 12:13:41.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:13:41.851: INFO: namespace: e2e-tests-daemonsets-kfs5g, resource: bindings, ignored listing per whitelist Jun 22 12:13:41.874: INFO: namespace e2e-tests-daemonsets-kfs5g deletion completed in 10.094326971s • [SLOW TEST:44.199 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:13:41.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:13:42.013: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfe25820-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-58x2g" to be "success or failure" Jun 22 12:13:42.017: INFO: Pod "downwardapi-volume-cfe25820-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214018ms Jun 22 12:13:44.020: INFO: Pod "downwardapi-volume-cfe25820-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007379527s Jun 22 12:13:46.025: INFO: Pod "downwardapi-volume-cfe25820-b481-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.012247817s Jun 22 12:13:48.030: INFO: Pod "downwardapi-volume-cfe25820-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016721873s STEP: Saw pod success Jun 22 12:13:48.030: INFO: Pod "downwardapi-volume-cfe25820-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:13:48.033: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-cfe25820-b481-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:13:48.069: INFO: Waiting for pod downwardapi-volume-cfe25820-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:13:48.084: INFO: Pod downwardapi-volume-cfe25820-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:13:48.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-58x2g" for this suite. 
Jun 22 12:13:54.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:13:54.175: INFO: namespace: e2e-tests-downward-api-58x2g, resource: bindings, ignored listing per whitelist Jun 22 12:13:54.212: INFO: namespace e2e-tests-downward-api-58x2g deletion completed in 6.124668237s • [SLOW TEST:12.338 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:13:54.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-d75478b2-b481-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 12:13:54.694: INFO: Waiting up to 5m0s for pod "pod-secrets-d776aa86-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-secrets-7fzhj" to be "success or failure" Jun 22 12:13:54.749: INFO: Pod "pod-secrets-d776aa86-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 54.706222ms Jun 22 12:13:56.863: INFO: Pod "pod-secrets-d776aa86-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169149345s Jun 22 12:13:58.891: INFO: Pod "pod-secrets-d776aa86-b481-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.197496391s Jun 22 12:14:00.896: INFO: Pod "pod-secrets-d776aa86-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.201769448s STEP: Saw pod success Jun 22 12:14:00.896: INFO: Pod "pod-secrets-d776aa86-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:14:00.899: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-d776aa86-b481-11ea-8cd8-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 22 12:14:00.935: INFO: Waiting for pod pod-secrets-d776aa86-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:14:00.939: INFO: Pod pod-secrets-d776aa86-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:14:00.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7fzhj" for this suite. 
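Note: the [sig-storage] Secrets test above mounts a secret volume as a non-root user, so three settings interact: the pod-level runAsUser, the fsGroup applied to the volume's files, and the secret volume's defaultMode. A sketch of that combination (names, IDs and modes are illustrative assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-demo             # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  securityContext:
    runAsUser: 1000                   # non-root UID (assumed value)
    fsGroup: 1000                     # group ownership applied to the mounted files
  containers:
  - name: secret-volume-test
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
      defaultMode: 0440               # readable by owner and fsGroup only (assumed mode)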
Jun 22 12:14:06.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:14:06.994: INFO: namespace: e2e-tests-secrets-7fzhj, resource: bindings, ignored listing per whitelist Jun 22 12:14:07.017: INFO: namespace e2e-tests-secrets-7fzhj deletion completed in 6.074234411s • [SLOW TEST:12.805 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:14:07.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-df604435-b481-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 12:14:08.498: INFO: Waiting up to 5m0s for pod "pod-secrets-dfa9812e-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-secrets-bhx8q" to be "success or failure" Jun 22 12:14:08.521: INFO: Pod "pod-secrets-dfa9812e-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.756095ms Jun 22 12:14:10.622: INFO: Pod "pod-secrets-dfa9812e-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123853026s Jun 22 12:14:12.626: INFO: Pod "pod-secrets-dfa9812e-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127864062s Jun 22 12:14:14.631: INFO: Pod "pod-secrets-dfa9812e-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132601284s STEP: Saw pod success Jun 22 12:14:14.631: INFO: Pod "pod-secrets-dfa9812e-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:14:14.634: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-dfa9812e-b481-11ea-8cd8-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 22 12:14:14.677: INFO: Waiting for pod pod-secrets-dfa9812e-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:14:14.689: INFO: Pod pod-secrets-dfa9812e-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:14:14.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bhx8q" for this suite. 
Jun 22 12:14:20.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:14:20.752: INFO: namespace: e2e-tests-secrets-bhx8q, resource: bindings, ignored listing per whitelist Jun 22 12:14:20.822: INFO: namespace e2e-tests-secrets-bhx8q deletion completed in 6.11035046s STEP: Destroying namespace "e2e-tests-secret-namespace-6qnn7" for this suite. Jun 22 12:14:26.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:14:26.852: INFO: namespace: e2e-tests-secret-namespace-6qnn7, resource: bindings, ignored listing per whitelist Jun 22 12:14:26.916: INFO: namespace e2e-tests-secret-namespace-6qnn7 deletion completed in 6.094113617s • [SLOW TEST:19.899 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:14:26.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jun 22 12:14:27.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-4kxt4 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 22 12:14:30.937: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0622 12:14:30.862919 2719 log.go:172] (0xc0001388f0) (0xc0005eabe0) Create stream\nI0622 12:14:30.862955 2719 log.go:172] (0xc0001388f0) (0xc0005eabe0) Stream added, broadcasting: 1\nI0622 12:14:30.865364 2719 log.go:172] (0xc0001388f0) Reply frame received for 1\nI0622 12:14:30.865405 2719 log.go:172] (0xc0001388f0) (0xc0005eac80) Create stream\nI0622 12:14:30.865414 2719 log.go:172] (0xc0001388f0) (0xc0005eac80) Stream added, broadcasting: 3\nI0622 12:14:30.867186 2719 log.go:172] (0xc0001388f0) Reply frame received for 3\nI0622 12:14:30.867237 2719 log.go:172] (0xc0001388f0) (0xc000772140) Create stream\nI0622 12:14:30.867249 2719 log.go:172] (0xc0001388f0) (0xc000772140) Stream added, broadcasting: 5\nI0622 12:14:30.867988 2719 log.go:172] (0xc0001388f0) Reply frame received for 5\nI0622 12:14:30.868045 2719 log.go:172] (0xc0001388f0) (0xc0007721e0) Create stream\nI0622 12:14:30.868069 2719 log.go:172] (0xc0001388f0) (0xc0007721e0) Stream added, broadcasting: 7\nI0622 12:14:30.868895 2719 log.go:172] (0xc0001388f0) Reply frame received for 7\nI0622 12:14:30.868986 2719 log.go:172] (0xc0005eac80) (3) Writing data frame\nI0622 12:14:30.869073 2719 log.go:172] (0xc0005eac80) (3) Writing data frame\nI0622 12:14:30.869944 2719 log.go:172] (0xc0001388f0) Data frame received for 5\nI0622 12:14:30.869968 2719 log.go:172] (0xc000772140) (5) Data frame handling\nI0622 12:14:30.869988 2719 log.go:172] (0xc000772140) (5) Data frame sent\nI0622 12:14:30.870676 2719 log.go:172] (0xc0001388f0) Data frame received for 5\nI0622 12:14:30.870702 2719 log.go:172] (0xc000772140) (5) Data frame handling\nI0622 12:14:30.870729 2719 log.go:172] (0xc000772140) (5) Data frame sent\nI0622 12:14:30.907961 2719 log.go:172] (0xc0001388f0) Data frame received for 7\nI0622 12:14:30.908020 2719 log.go:172] (0xc0007721e0) (7) Data frame handling\nI0622 12:14:30.908051 2719 log.go:172] (0xc0001388f0) Data frame received for 5\nI0622 12:14:30.908072 2719 log.go:172] (0xc000772140) (5) Data frame handling\nI0622 12:14:30.908121 2719 log.go:172] (0xc0001388f0) Data frame received for 1\nI0622 12:14:30.908140 2719 log.go:172] (0xc0005eabe0) (1) Data frame handling\nI0622 12:14:30.908154 2719 log.go:172] (0xc0005eabe0) (1) Data frame sent\nI0622 12:14:30.908305 2719 log.go:172] (0xc0001388f0) (0xc0005eabe0) Stream removed, broadcasting: 1\nI0622 12:14:30.908425 2719 log.go:172] (0xc0001388f0) (0xc0005eac80) Stream removed, broadcasting: 3\nI0622 12:14:30.908463 2719 log.go:172] (0xc0001388f0) Go away received\nI0622 12:14:30.908510 2719 log.go:172] (0xc0001388f0) (0xc0005eabe0) Stream removed, broadcasting: 1\nI0622 12:14:30.908542 2719 log.go:172] (0xc0001388f0) (0xc0005eac80) Stream removed, broadcasting: 3\nI0622 12:14:30.908564 2719 log.go:172] (0xc0001388f0) (0xc000772140) Stream removed, broadcasting: 5\nI0622 12:14:30.908588 2719 log.go:172] (0xc0001388f0) (0xc0007721e0) Stream removed, broadcasting: 7\n" Jun 22 12:14:30.937: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:14:32.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4kxt4" for this suite. 
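Note: the kubectl invocation logged above uses the deprecated --generator=job/v1 form (the stderr itself suggests kubectl create instead). What that generator produces is, roughly, a batch/v1 Job whose pod restarts on failure and keeps stdin open for the --attach/--stdin session; the following explicit manifest is a sketch of an equivalent object (the command mirrors the logged one, everything else is an assumption about the generated Job, not its exact shape):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure        # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                   # from --stdin; lets the attached session feed "abcd1234"

Deleting the Job afterwards (kubectl delete job e2e-test-rm-busybox-job) corresponds to what --rm=true does automatically once the attached command exits, which is why the stdout above ends with 'job.batch "e2e-test-rm-busybox-job" deleted'.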
Jun 22 12:14:42.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:14:42.990: INFO: namespace: e2e-tests-kubectl-4kxt4, resource: bindings, ignored listing per whitelist Jun 22 12:14:43.046: INFO: namespace e2e-tests-kubectl-4kxt4 deletion completed in 10.098276164s • [SLOW TEST:16.130 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:14:43.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-f48f7004-b481-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 12:14:43.542: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f48ff373-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-ms46m" to be "success or failure" Jun 22 12:14:43.597: INFO: Pod "pod-projected-secrets-f48ff373-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 55.130378ms Jun 22 12:14:45.602: INFO: Pod "pod-projected-secrets-f48ff373-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059702624s Jun 22 12:14:47.605: INFO: Pod "pod-projected-secrets-f48ff373-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063010146s Jun 22 12:14:49.609: INFO: Pod "pod-projected-secrets-f48ff373-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067051569s STEP: Saw pod success Jun 22 12:14:49.609: INFO: Pod "pod-projected-secrets-f48ff373-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:14:49.611: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-f48ff373-b481-11ea-8cd8-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 22 12:14:49.650: INFO: Waiting for pod pod-projected-secrets-f48ff373-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:14:49.678: INFO: Pod pod-projected-secrets-f48ff373-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:14:49.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ms46m" for this suite. 
Jun 22 12:14:55.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:14:55.714: INFO: namespace: e2e-tests-projected-ms46m, resource: bindings, ignored listing per whitelist Jun 22 12:14:55.812: INFO: namespace e2e-tests-projected-ms46m deletion completed in 6.128977361s • [SLOW TEST:12.765 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:14:55.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-fc31bbd6-b481-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 22 12:14:56.351: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc32cd70-b481-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-configmap-tl45z" to be "success or failure" Jun 22 12:14:56.361: INFO: Pod "pod-configmaps-fc32cd70-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.381092ms Jun 22 12:14:58.365: INFO: Pod "pod-configmaps-fc32cd70-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013517447s Jun 22 12:15:00.369: INFO: Pod "pod-configmaps-fc32cd70-b481-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017240754s Jun 22 12:15:02.372: INFO: Pod "pod-configmaps-fc32cd70-b481-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02068953s STEP: Saw pod success Jun 22 12:15:02.372: INFO: Pod "pod-configmaps-fc32cd70-b481-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:15:02.375: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-fc32cd70-b481-11ea-8cd8-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 22 12:15:02.402: INFO: Waiting for pod pod-configmaps-fc32cd70-b481-11ea-8cd8-0242ac11001b to disappear Jun 22 12:15:02.430: INFO: Pod pod-configmaps-fc32cd70-b481-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:15:02.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tl45z" for this suite. 
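Note: the [sig-storage] ConfigMap test above is the ConfigMap counterpart of the secret-volume cases: a ConfigMap is created, mounted as a volume with a defaultMode, and the configmap-volume-test container checks the file contents and permissions. A sketch (names, key/value and the mode are illustrative assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-demo   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  containers:
  - name: configmap-volume-test
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-demo
      defaultMode: 0400               # assumed mode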
Jun 22 12:15:08.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:15:08.508: INFO: namespace: e2e-tests-configmap-tl45z, resource: bindings, ignored listing per whitelist Jun 22 12:15:08.518: INFO: namespace e2e-tests-configmap-tl45z deletion completed in 6.083544912s • [SLOW TEST:12.706 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:15:08.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:15:08.710: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0391b297-b482-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-v6ptf" to be "success or failure" Jun 22 12:15:08.726: INFO: Pod "downwardapi-volume-0391b297-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.825561ms Jun 22 12:15:10.730: INFO: Pod "downwardapi-volume-0391b297-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019854249s Jun 22 12:15:12.734: INFO: Pod "downwardapi-volume-0391b297-b482-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024016296s STEP: Saw pod success Jun 22 12:15:12.734: INFO: Pod "downwardapi-volume-0391b297-b482-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:15:12.737: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0391b297-b482-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:15:12.777: INFO: Waiting for pod downwardapi-volume-0391b297-b482-11ea-8cd8-0242ac11001b to disappear Jun 22 12:15:12.786: INFO: Pod downwardapi-volume-0391b297-b482-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:15:12.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-v6ptf" for this suite. 
Jun 22 12:15:18.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:15:18.974: INFO: namespace: e2e-tests-downward-api-v6ptf, resource: bindings, ignored listing per whitelist Jun 22 12:15:18.976: INFO: namespace e2e-tests-downward-api-v6ptf deletion completed in 6.16101006s • [SLOW TEST:10.458 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:15:18.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-rw77l STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rw77l to expose endpoints map[] Jun 22 12:15:19.188: INFO: Get endpoints failed (25.240453ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 22 12:15:20.192: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rw77l exposes endpoints map[] (1.029468889s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-rw77l STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rw77l to expose endpoints map[pod1:[80]] Jun 22 12:15:24.618: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rw77l exposes endpoints map[pod1:[80]] (4.418995035s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-rw77l STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rw77l to expose endpoints map[pod1:[80] pod2:[80]] Jun 22 12:15:28.699: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rw77l exposes endpoints map[pod1:[80] pod2:[80]] (4.076311522s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-rw77l STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rw77l to expose endpoints map[pod2:[80]] Jun 22 12:15:29.750: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rw77l exposes endpoints map[pod2:[80]] (1.047112518s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-rw77l STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rw77l to expose endpoints map[] Jun 22 12:15:31.306: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rw77l exposes endpoints map[] (1.551308737s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:15:32.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-rw77l" for this suite. Jun 22 12:15:41.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:15:41.684: INFO: namespace: e2e-tests-services-rw77l, resource: bindings, ignored listing per whitelist Jun 22 12:15:41.695: INFO: namespace e2e-tests-services-rw77l deletion completed in 8.810918648s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:22.719 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:15:41.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 22 12:15:42.665: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 22 12:15:47.887: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:15:49.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-h5zr7" for this suite. 
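The ReplicationController behaviour checked above ("release no longer matching pods") can be observed manually: relabel a controlled pod so that it falls outside the selector, and the controller releases it and creates a replacement. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-demo
spec:
  replicas: 1
  selector:
    name: pod-release-demo
  template:
    metadata:
      labels:
        name: pod-release-demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF

# Change the label of the controlled pod so it no longer matches the selector.
POD=$(kubectl get pods -l name=pod-release-demo -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite

# The relabeled pod should lose its ownerReference to the RC (it is "released"),
# and the RC should create a new pod to restore the desired replica count.
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}{"\n"}'
kubectl get pods -l name=pod-release-demo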
Jun 22 12:16:05.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:16:05.403: INFO: namespace: e2e-tests-replication-controller-h5zr7, resource: bindings, ignored listing per whitelist Jun 22 12:16:05.457: INFO: namespace e2e-tests-replication-controller-h5zr7 deletion completed in 15.581314829s • [SLOW TEST:23.762 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:16:05.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-wvkc STEP: Creating a pod to test atomic-volume-subpath Jun 22 12:16:06.362: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wvkc" in namespace "e2e-tests-subpath-9sfhd" to be "success or failure" Jun 22 12:16:06.611: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 249.452851ms Jun 22 12:16:08.615: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253797538s Jun 22 12:16:10.774: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41198454s Jun 22 12:16:12.778: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416253473s Jun 22 12:16:15.360: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.998391104s Jun 22 12:16:17.840: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.478210448s Jun 22 12:16:19.844: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.482145342s Jun 22 12:16:21.848: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.486675247s Jun 22 12:16:23.851: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.489929919s Jun 22 12:16:25.855: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Running", Reason="", readiness=true. Elapsed: 19.493516985s Jun 22 12:16:27.859: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Running", Reason="", readiness=false. Elapsed: 21.497131488s Jun 22 12:16:29.863: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Running", Reason="", readiness=false. Elapsed: 23.501866546s Jun 22 12:16:31.868: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Running", Reason="", readiness=false. 
Elapsed: 25.5059969s Jun 22 12:16:34.597: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Running", Reason="", readiness=false. Elapsed: 28.235708587s Jun 22 12:16:36.720: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Running", Reason="", readiness=false. Elapsed: 30.358219257s Jun 22 12:16:39.914: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Running", Reason="", readiness=false. Elapsed: 33.552231307s Jun 22 12:16:41.917: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Running", Reason="", readiness=false. Elapsed: 35.555165925s Jun 22 12:16:43.946: INFO: Pod "pod-subpath-test-secret-wvkc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.584718971s STEP: Saw pod success Jun 22 12:16:43.946: INFO: Pod "pod-subpath-test-secret-wvkc" satisfied condition "success or failure" Jun 22 12:16:43.949: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-wvkc container test-container-subpath-secret-wvkc: STEP: delete the pod Jun 22 12:16:44.707: INFO: Waiting for pod pod-subpath-test-secret-wvkc to disappear Jun 22 12:16:44.763: INFO: Pod pod-subpath-test-secret-wvkc no longer exists STEP: Deleting pod pod-subpath-test-secret-wvkc Jun 22 12:16:44.763: INFO: Deleting pod "pod-subpath-test-secret-wvkc" in namespace "e2e-tests-subpath-9sfhd" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:16:44.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-9sfhd" for this suite. Jun 22 12:16:51.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:16:51.611: INFO: namespace: e2e-tests-subpath-9sfhd, resource: bindings, ignored listing per whitelist Jun 22 12:16:51.645: INFO: namespace e2e-tests-subpath-9sfhd deletion completed in 6.87670781s • [SLOW TEST:46.188 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:16:51.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 22 12:17:00.435: INFO: Successfully updated pod "labelsupdate41059f44-b482-11ea-8cd8-0242ac11001b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 
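The label update performed by the Downward API volume test above ("Successfully updated pod labelsupdate...") relies on the kubelet refreshing projected pod metadata while the container keeps running. A minimal sketch of the same setup, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# Add a label to the running pod; after a short delay the kubelet rewrites the
# projected file without restarting the container.
kubectl label pod labelsupdate-demo key2=value2
kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels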
Jun 22 12:17:02.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q5nzr" for this suite. Jun 22 12:17:25.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:17:25.472: INFO: namespace: e2e-tests-downward-api-q5nzr, resource: bindings, ignored listing per whitelist Jun 22 12:17:25.472: INFO: namespace e2e-tests-downward-api-q5nzr deletion completed in 22.757904013s • [SLOW TEST:33.827 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:17:25.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-6xt8v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-6xt8v to expose endpoints map[] Jun 22 12:17:25.693: INFO: Get endpoints failed (21.331649ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 22 12:17:26.697: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-6xt8v exposes endpoints map[] (1.025533719s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-6xt8v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-6xt8v to expose endpoints map[pod1:[100]] Jun 22 12:17:30.784: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-6xt8v exposes endpoints map[pod1:[100]] (4.079392512s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-6xt8v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-6xt8v to expose endpoints map[pod1:[100] pod2:[101]] Jun 22 12:17:35.007: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-6xt8v exposes endpoints map[pod1:[100] pod2:[101]] (4.219461293s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-6xt8v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-6xt8v to expose endpoints map[pod2:[101]] Jun 22 12:17:36.072: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-6xt8v exposes endpoints map[pod2:[101]] (1.060271286s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-6xt8v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-6xt8v to expose endpoints map[] Jun 22 
12:17:37.151: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-6xt8v exposes endpoints map[] (1.074991585s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:17:37.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-6xt8v" for this suite. Jun 22 12:17:46.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:17:46.117: INFO: namespace: e2e-tests-services-6xt8v, resource: bindings, ignored listing per whitelist Jun 22 12:17:46.170: INFO: namespace e2e-tests-services-6xt8v deletion completed in 8.131690163s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:20.698 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:17:46.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 22 12:17:47.298: INFO: Waiting up to 5m0s for pod "pod-61f878ed-b482-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-x54vk" to be "success or failure" Jun 22 12:17:47.324: INFO: Pod "pod-61f878ed-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.468711ms Jun 22 12:17:49.329: INFO: Pod "pod-61f878ed-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030743852s Jun 22 12:17:51.336: INFO: Pod "pod-61f878ed-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038255914s Jun 22 12:17:53.340: INFO: Pod "pod-61f878ed-b482-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041980733s STEP: Saw pod success Jun 22 12:17:53.340: INFO: Pod "pod-61f878ed-b482-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:17:53.343: INFO: Trying to get logs from node hunter-worker2 pod pod-61f878ed-b482-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 12:17:53.378: INFO: Waiting for pod pod-61f878ed-b482-11ea-8cd8-0242ac11001b to disappear Jun 22 12:17:53.394: INFO: Pod pod-61f878ed-b482-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:17:53.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-x54vk" for this suite. Jun 22 12:17:59.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:17:59.469: INFO: namespace: e2e-tests-emptydir-x54vk, resource: bindings, ignored listing per whitelist Jun 22 12:17:59.493: INFO: namespace e2e-tests-emptydir-x54vk deletion completed in 6.095902546s • [SLOW TEST:13.322 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:17:59.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-xx27 STEP: Creating a pod to test atomic-volume-subpath Jun 22 12:17:59.611: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xx27" in namespace "e2e-tests-subpath-kj9xd" to be "success or failure" Jun 22 12:17:59.615: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Pending", Reason="", readiness=false. Elapsed: 3.745842ms Jun 22 12:18:01.619: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007764383s Jun 22 12:18:03.623: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011956682s Jun 22 12:18:05.907: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295614213s Jun 22 12:18:07.930: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318577015s Jun 22 12:18:09.934: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.322661216s Jun 22 12:18:11.939: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Running", Reason="", readiness=false. Elapsed: 12.328033443s Jun 22 12:18:14.040: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Running", Reason="", readiness=false. Elapsed: 14.429139371s Jun 22 12:18:16.045: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Running", Reason="", readiness=false. Elapsed: 16.43411791s Jun 22 12:18:18.595: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Running", Reason="", readiness=false. Elapsed: 18.984202444s Jun 22 12:18:20.599: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Running", Reason="", readiness=false. Elapsed: 20.987561495s Jun 22 12:18:22.603: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Running", Reason="", readiness=false. Elapsed: 22.991369593s Jun 22 12:18:24.606: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Running", Reason="", readiness=false. Elapsed: 24.995265769s Jun 22 12:18:26.611: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Running", Reason="", readiness=false. Elapsed: 26.999980351s Jun 22 12:18:28.617: INFO: Pod "pod-subpath-test-configmap-xx27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.00540413s STEP: Saw pod success Jun 22 12:18:28.617: INFO: Pod "pod-subpath-test-configmap-xx27" satisfied condition "success or failure" Jun 22 12:18:28.620: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-xx27 container test-container-subpath-configmap-xx27: STEP: delete the pod Jun 22 12:18:28.675: INFO: Waiting for pod pod-subpath-test-configmap-xx27 to disappear Jun 22 12:18:28.682: INFO: Pod pod-subpath-test-configmap-xx27 no longer exists STEP: Deleting pod pod-subpath-test-configmap-xx27 Jun 22 12:18:28.682: INFO: Deleting pod "pod-subpath-test-configmap-xx27" in namespace "e2e-tests-subpath-kj9xd" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:18:28.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-kj9xd" for this suite. 
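The subPath tests above mount a single key of a Secret or ConfigMap at a file path inside an existing directory. A minimal ConfigMap-based sketch (illustrative names):

kubectl create configmap subpath-demo-cm --from-literal=config.yaml='hello: world'

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/app/config.yaml"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/app/config.yaml
      subPath: config.yaml          # mount only this key, not the whole volume
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo-cm
EOF

kubectl logs subpath-demo

Note that, unlike a whole-volume mount, a subPath mount does not pick up later updates to the ConfigMap or Secret.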
Jun 22 12:18:34.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:18:34.733: INFO: namespace: e2e-tests-subpath-kj9xd, resource: bindings, ignored listing per whitelist Jun 22 12:18:34.824: INFO: namespace e2e-tests-subpath-kj9xd deletion completed in 6.136630581s • [SLOW TEST:35.331 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:18:34.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 22 12:18:35.012: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:18:35.014: INFO: Number of nodes with available pods: 0 Jun 22 12:18:35.014: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:18:36.019: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:18:36.022: INFO: Number of nodes with available pods: 0 Jun 22 12:18:36.022: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:18:37.020: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:18:37.024: INFO: Number of nodes with available pods: 0 Jun 22 12:18:37.024: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:18:38.088: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:18:38.164: INFO: Number of nodes with available pods: 0 Jun 22 12:18:38.164: INFO: Node hunter-worker is running more than one daemon pod Jun 22 12:18:39.057: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:18:39.060: INFO: Number of nodes with available pods: 1 Jun 22 12:18:39.060: INFO: Node hunter-worker2 is running more than one daemon pod Jun 22 12:18:40.020: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:18:40.024: INFO: Number of nodes with available pods: 2 Jun 22 12:18:40.024: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 22 12:18:40.045: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 12:18:40.063: INFO: Number of nodes with available pods: 2 Jun 22 12:18:40.063: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
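Two behaviours are visible in the DaemonSet log above: nodes whose taints the DaemonSet does not tolerate (here the control-plane node) are skipped, and a daemon pod that fails or disappears is recreated on its node. A minimal sketch, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      # Without a toleration for the control-plane taint the DaemonSet skips that
      # node, as the log above shows. Add one to run there as well:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   operator: Exists
      #   effect: NoSchedule
      containers:
      - name: app
        image: nginx
EOF

# Delete one daemon pod; the controller should recreate it on the same node.
POD=$(kubectl get pods -l app=daemon-set-demo -o jsonpath='{.items[0].metadata.name}')
kubectl delete pod "$POD"
kubectl get pods -l app=daemon-set-demo -o wide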
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-b8cvc, will wait for the garbage collector to delete the pods Jun 22 12:18:41.171: INFO: Deleting DaemonSet.extensions daemon-set took: 5.263267ms Jun 22 12:18:41.271: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.156383ms Jun 22 12:18:44.595: INFO: Number of nodes with available pods: 0 Jun 22 12:18:44.595: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 12:18:44.598: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-b8cvc/daemonsets","resourceVersion":"17298389"},"items":null} Jun 22 12:18:44.601: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-b8cvc/pods","resourceVersion":"17298389"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:18:44.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-b8cvc" for this suite. Jun 22 12:18:50.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:18:50.689: INFO: namespace: e2e-tests-daemonsets-b8cvc, resource: bindings, ignored listing per whitelist Jun 22 12:18:50.710: INFO: namespace e2e-tests-daemonsets-b8cvc deletion completed in 6.097537264s • [SLOW TEST:15.885 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:18:50.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 22 12:18:50.848: INFO: Waiting up to 5m0s for pod "downward-api-87f590a1-b482-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-cbph9" to be "success or failure" Jun 22 12:18:50.856: INFO: Pod "downward-api-87f590a1-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.216226ms Jun 22 12:18:52.931: INFO: Pod "downward-api-87f590a1-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083736616s Jun 22 12:18:54.935: INFO: Pod "downward-api-87f590a1-b482-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.087253503s STEP: Saw pod success Jun 22 12:18:54.935: INFO: Pod "downward-api-87f590a1-b482-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:18:54.940: INFO: Trying to get logs from node hunter-worker2 pod downward-api-87f590a1-b482-11ea-8cd8-0242ac11001b container dapi-container: STEP: delete the pod Jun 22 12:18:55.059: INFO: Waiting for pod downward-api-87f590a1-b482-11ea-8cd8-0242ac11001b to disappear Jun 22 12:18:55.083: INFO: Pod downward-api-87f590a1-b482-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:18:55.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cbph9" for this suite. Jun 22 12:19:01.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:19:01.154: INFO: namespace: e2e-tests-downward-api-cbph9, resource: bindings, ignored listing per whitelist Jun 22 12:19:01.199: INFO: namespace e2e-tests-downward-api-cbph9 deletion completed in 6.112076393s • [SLOW TEST:10.488 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:19:01.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Jun 22 12:19:05.372: INFO: Pod pod-hostip-8e3c2f0b-b482-11ea-8cd8-0242ac11001b has hostIP: 172.17.0.3 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:19:05.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rkdvt" for this suite. 
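The host IP printed by the Pods test above comes from the pod's status once it has been scheduled, and the same value can be handed to the container through the downward API. A minimal sketch (illustrative names):

# Create a throwaway pod and read the IP of the node it landed on.
kubectl run hostip-demo --image=nginx --restart=Never
kubectl wait --for=condition=Ready pod/hostip-demo
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'

# To expose it inside the container, add an env var with:
#   valueFrom:
#     fieldRef:
#       fieldPath: status.hostIP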
Jun 22 12:19:27.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:19:27.431: INFO: namespace: e2e-tests-pods-rkdvt, resource: bindings, ignored listing per whitelist Jun 22 12:19:27.443: INFO: namespace e2e-tests-pods-rkdvt deletion completed in 22.06861313s • [SLOW TEST:26.244 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:19:27.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:19:27.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-r85ss" for this suite. 
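The QoS class verified above is derived from the pod's resource requests and limits and surfaced in status.qosClass. A minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF

# Requests equal to limits for every container => Guaranteed; requests lower than
# limits => Burstable; no requests or limits at all => BestEffort.
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}{"\n"}'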
Jun 22 12:19:49.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:19:49.728: INFO: namespace: e2e-tests-pods-r85ss, resource: bindings, ignored listing per whitelist Jun 22 12:19:49.759: INFO: namespace e2e-tests-pods-r85ss deletion completed in 22.135712655s • [SLOW TEST:22.316 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:19:49.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Jun 22 12:19:49.859: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix801831849/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:19:49.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hfhf5" for this suite. 
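The --unix-socket variant of kubectl proxy tested above serves the same API endpoints over a local socket instead of a TCP port. A minimal sketch:

# Start the proxy on a Unix socket, query it with curl, then stop it.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill $!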
Jun 22 12:19:55.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:19:56.023: INFO: namespace: e2e-tests-kubectl-hfhf5, resource: bindings, ignored listing per whitelist Jun 22 12:19:56.051: INFO: namespace e2e-tests-kubectl-hfhf5 deletion completed in 6.106146164s • [SLOW TEST:6.292 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:19:56.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 12:19:56.176: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.952621ms) Jun 22 12:19:56.180: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.021711ms) Jun 22 12:19:56.184: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.147318ms) Jun 22 12:19:56.188: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.903141ms) Jun 22 12:19:56.192: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.09476ms) Jun 22 12:19:56.196: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.98394ms) Jun 22 12:19:56.200: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.852558ms) Jun 22 12:19:56.204: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.83645ms) Jun 22 12:19:56.207: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.581476ms) Jun 22 12:19:56.211: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.548988ms) Jun 22 12:19:56.215: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.767411ms) Jun 22 12:19:56.219: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.848596ms) Jun 22 12:19:56.223: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.308405ms) Jun 22 12:19:56.227: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.515926ms) Jun 22 12:19:56.230: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.327647ms) Jun 22 12:19:56.233: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.128623ms) Jun 22 12:19:56.236: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.027413ms) Jun 22 12:19:56.239: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.983728ms) Jun 22 12:19:56.242: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.847466ms) Jun 22 12:19:56.246: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.562299ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:19:56.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-z6l4d" for this suite. Jun 22 12:20:02.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:20:02.367: INFO: namespace: e2e-tests-proxy-z6l4d, resource: bindings, ignored listing per whitelist Jun 22 12:20:02.411: INFO: namespace e2e-tests-proxy-z6l4d deletion completed in 6.137196501s • [SLOW TEST:6.359 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:20:02.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 22 12:20:02.542: INFO: Waiting up to 5m0s for pod "downward-api-b2b7e38c-b482-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-vnwqb" to be "success or failure" Jun 22 12:20:02.557: INFO: Pod "downward-api-b2b7e38c-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.181652ms Jun 22 12:20:04.624: INFO: Pod "downward-api-b2b7e38c-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081773351s Jun 22 12:20:06.628: INFO: Pod "downward-api-b2b7e38c-b482-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.085733161s Jun 22 12:20:08.631: INFO: Pod "downward-api-b2b7e38c-b482-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088942035s STEP: Saw pod success Jun 22 12:20:08.631: INFO: Pod "downward-api-b2b7e38c-b482-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:20:08.633: INFO: Trying to get logs from node hunter-worker2 pod downward-api-b2b7e38c-b482-11ea-8cd8-0242ac11001b container dapi-container: STEP: delete the pod Jun 22 12:20:08.655: INFO: Waiting for pod downward-api-b2b7e38c-b482-11ea-8cd8-0242ac11001b to disappear Jun 22 12:20:08.672: INFO: Pod downward-api-b2b7e38c-b482-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:20:08.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vnwqb" for this suite. 
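The Downward API test above checks that, when a container declares no resource limits, resourceFieldRef falls back to the node's allocatable cpu and memory. A minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: default-limits-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
    # No resources section: the values below resolve to node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF

kubectl logs default-limits-demo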
Jun 22 12:20:14.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:20:14.784: INFO: namespace: e2e-tests-downward-api-vnwqb, resource: bindings, ignored listing per whitelist Jun 22 12:20:14.795: INFO: namespace e2e-tests-downward-api-vnwqb deletion completed in 6.119949028s • [SLOW TEST:12.384 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:20:14.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 22 12:20:15.038: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ngm6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-ngm6w/configmaps/e2e-watch-test-label-changed,UID:ba1d2620-b482-11ea-99e8-0242ac110002,ResourceVersion:17298704,Generation:0,CreationTimestamp:2020-06-22 12:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 12:20:15.038: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ngm6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-ngm6w/configmaps/e2e-watch-test-label-changed,UID:ba1d2620-b482-11ea-99e8-0242ac110002,ResourceVersion:17298705,Generation:0,CreationTimestamp:2020-06-22 12:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 22 12:20:15.038: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ngm6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-ngm6w/configmaps/e2e-watch-test-label-changed,UID:ba1d2620-b482-11ea-99e8-0242ac110002,ResourceVersion:17298706,Generation:0,CreationTimestamp:2020-06-22 12:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 22 12:20:25.085: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ngm6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-ngm6w/configmaps/e2e-watch-test-label-changed,UID:ba1d2620-b482-11ea-99e8-0242ac110002,ResourceVersion:17298727,Generation:0,CreationTimestamp:2020-06-22 12:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 12:20:25.085: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ngm6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-ngm6w/configmaps/e2e-watch-test-label-changed,UID:ba1d2620-b482-11ea-99e8-0242ac110002,ResourceVersion:17298728,Generation:0,CreationTimestamp:2020-06-22 12:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 22 12:20:25.085: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ngm6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-ngm6w/configmaps/e2e-watch-test-label-changed,UID:ba1d2620-b482-11ea-99e8-0242ac110002,ResourceVersion:17298729,Generation:0,CreationTimestamp:2020-06-22 12:20:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:20:25.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-ngm6w" for this suite. 
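The watch test above shows that a watch scoped by a label selector reports DELETED when an object's labels stop matching and ADDED again when they are restored, even though the object itself is never deleted. The e2e test drives the watch API directly; a rough CLI analogue (illustrative names) is:

# Watch ConfigMaps that carry a particular label (newer kubectl versions can also
# print the event type with --output-watch-events).
kubectl create configmap watch-demo --from-literal=mutation=0
kubectl label configmap watch-demo watch-this-configmap=label-changed-and-restored
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &

# Changing the label makes the object drop out of the watch's view (DELETED);
# restoring it brings it back (ADDED).
kubectl label configmap watch-demo watch-this-configmap=other --overwrite
kubectl label configmap watch-demo watch-this-configmap=label-changed-and-restored --overwrite
kill $!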
Jun 22 12:20:31.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:20:31.165: INFO: namespace: e2e-tests-watch-ngm6w, resource: bindings, ignored listing per whitelist Jun 22 12:20:31.216: INFO: namespace e2e-tests-watch-ngm6w deletion completed in 6.096991594s • [SLOW TEST:16.420 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:20:31.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:20:31.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3dddeea-b482-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-2fl2b" to be "success or failure" Jun 22 12:20:31.335: INFO: Pod "downwardapi-volume-c3dddeea-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.659724ms Jun 22 12:20:33.338: INFO: Pod "downwardapi-volume-c3dddeea-b482-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023334004s Jun 22 12:20:35.342: INFO: Pod "downwardapi-volume-c3dddeea-b482-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026592444s STEP: Saw pod success Jun 22 12:20:35.342: INFO: Pod "downwardapi-volume-c3dddeea-b482-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:20:35.345: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c3dddeea-b482-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:20:35.362: INFO: Waiting for pod downwardapi-volume-c3dddeea-b482-11ea-8cd8-0242ac11001b to disappear Jun 22 12:20:35.381: INFO: Pod downwardapi-volume-c3dddeea-b482-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:20:35.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2fl2b" for this suite. 
Jun 22 12:20:41.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:20:41.435: INFO: namespace: e2e-tests-downward-api-2fl2b, resource: bindings, ignored listing per whitelist Jun 22 12:20:41.498: INFO: namespace e2e-tests-downward-api-2fl2b deletion completed in 6.113104516s • [SLOW TEST:10.282 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:20:41.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Jun 22 12:20:41.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:20:41.969: INFO: stderr: "" Jun 22 12:20:41.969: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 22 12:20:41.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:20:42.082: INFO: stderr: "" Jun 22 12:20:42.082: INFO: stdout: "update-demo-nautilus-6hc2q update-demo-nautilus-lm2wg " Jun 22 12:20:42.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hc2q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:20:42.197: INFO: stderr: "" Jun 22 12:20:42.197: INFO: stdout: "" Jun 22 12:20:42.197: INFO: update-demo-nautilus-6hc2q is created but not running Jun 22 12:20:47.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:20:47.311: INFO: stderr: "" Jun 22 12:20:47.312: INFO: stdout: "update-demo-nautilus-6hc2q update-demo-nautilus-lm2wg " Jun 22 12:20:47.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hc2q -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:20:47.413: INFO: stderr: "" Jun 22 12:20:47.413: INFO: stdout: "true" Jun 22 12:20:47.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hc2q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:20:47.512: INFO: stderr: "" Jun 22 12:20:47.512: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 12:20:47.512: INFO: validating pod update-demo-nautilus-6hc2q Jun 22 12:20:47.539: INFO: got data: { "image": "nautilus.jpg" } Jun 22 12:20:47.539: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 12:20:47.539: INFO: update-demo-nautilus-6hc2q is verified up and running Jun 22 12:20:47.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lm2wg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:20:47.638: INFO: stderr: "" Jun 22 12:20:47.638: INFO: stdout: "true" Jun 22 12:20:47.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lm2wg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:20:47.743: INFO: stderr: "" Jun 22 12:20:47.743: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 12:20:47.743: INFO: validating pod update-demo-nautilus-lm2wg Jun 22 12:20:47.758: INFO: got data: { "image": "nautilus.jpg" } Jun 22 12:20:47.758: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 12:20:47.758: INFO: update-demo-nautilus-lm2wg is verified up and running STEP: rolling-update to new replication controller Jun 22 12:20:47.762: INFO: scanned /root for discovery docs: Jun 22 12:20:47.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:21:10.318: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 22 12:21:10.318: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 22 12:21:10.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:21:10.417: INFO: stderr: "" Jun 22 12:21:10.417: INFO: stdout: "update-demo-kitten-pl8ld update-demo-kitten-rp92h " Jun 22 12:21:10.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pl8ld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:21:10.519: INFO: stderr: "" Jun 22 12:21:10.519: INFO: stdout: "true" Jun 22 12:21:10.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pl8ld -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:21:10.614: INFO: stderr: "" Jun 22 12:21:10.614: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 22 12:21:10.614: INFO: validating pod update-demo-kitten-pl8ld Jun 22 12:21:10.626: INFO: got data: { "image": "kitten.jpg" } Jun 22 12:21:10.626: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 22 12:21:10.626: INFO: update-demo-kitten-pl8ld is verified up and running Jun 22 12:21:10.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rp92h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:21:10.745: INFO: stderr: "" Jun 22 12:21:10.745: INFO: stdout: "true" Jun 22 12:21:10.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rp92h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rzm5k' Jun 22 12:21:10.868: INFO: stderr: "" Jun 22 12:21:10.868: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 22 12:21:10.868: INFO: validating pod update-demo-kitten-rp92h Jun 22 12:21:10.885: INFO: got data: { "image": "kitten.jpg" } Jun 22 12:21:10.885: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 22 12:21:10.885: INFO: update-demo-kitten-rp92h is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:21:10.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rzm5k" for this suite. 
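The Update Demo test drives the deprecated kubectl rolling-update against a replication controller, swapping the nautilus pods for kitten pods. A rough sketch of the same flow from the command line; the manifest file names are hypothetical, and the Deployment commands at the end show the modern replacement rather than what the test itself runs:

# Legacy flow, as exercised above (kubectl 1.13 warns that "rollout" replaces "rolling-update"):
kubectl create -f nautilus-rc.yaml                                                  # hypothetical manifest for the update-demo-nautilus RC
kubectl rolling-update update-demo-nautilus --update-period=1s -f kitten-rc.yaml    # hypothetical manifest for the replacement RC

# Equivalent intent with a Deployment today (deployment and container names are illustrative):
kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo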
Jun 22 12:21:32.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:21:32.990: INFO: namespace: e2e-tests-kubectl-rzm5k, resource: bindings, ignored listing per whitelist Jun 22 12:21:33.043: INFO: namespace e2e-tests-kubectl-rzm5k deletion completed in 22.1089306s • [SLOW TEST:51.545 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:21:33.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:21:37.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-wrm8s" for this suite. 
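The hostAliases test checks that entries declared in the pod spec end up in the container's /etc/hosts. A minimal sketch of such a pod; the name, image, and alias values are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo              # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]
EOF
kubectl logs hostaliases-demo         # /etc/hosts should contain a "127.0.0.1 foo.local bar.local" entry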
Jun 22 12:22:23.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:22:23.244: INFO: namespace: e2e-tests-kubelet-test-wrm8s, resource: bindings, ignored listing per whitelist Jun 22 12:22:23.293: INFO: namespace e2e-tests-kubelet-test-wrm8s deletion completed in 46.11328236s • [SLOW TEST:50.249 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:22:23.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:22:23.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06ae404b-b483-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-downward-api-mqbxv" to be "success or failure" Jun 22 12:22:23.434: INFO: Pod "downwardapi-volume-06ae404b-b483-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.411954ms Jun 22 12:22:25.439: INFO: Pod "downwardapi-volume-06ae404b-b483-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007833938s Jun 22 12:22:27.442: INFO: Pod "downwardapi-volume-06ae404b-b483-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011815642s STEP: Saw pod success Jun 22 12:22:27.443: INFO: Pod "downwardapi-volume-06ae404b-b483-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:22:27.445: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-06ae404b-b483-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:22:27.514: INFO: Waiting for pod downwardapi-volume-06ae404b-b483-11ea-8cd8-0242ac11001b to disappear Jun 22 12:22:27.563: INFO: Pod downwardapi-volume-06ae404b-b483-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:22:27.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mqbxv" for this suite. 
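This Downward API variant reads limits.memory through a downwardAPI volume from a container that sets no memory limit, so the kubelet substitutes the node's allocatable memory. A sketch of the relevant wiring; names and image are stand-ins:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory set on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory     # reports node allocatable memory when no limit is set
EOF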
Jun 22 12:22:33.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:22:33.643: INFO: namespace: e2e-tests-downward-api-mqbxv, resource: bindings, ignored listing per whitelist Jun 22 12:22:33.686: INFO: namespace e2e-tests-downward-api-mqbxv deletion completed in 6.113949155s • [SLOW TEST:10.393 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:22:33.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-rccf4 in namespace e2e-tests-proxy-5skfs I0622 12:22:33.898962 7 runners.go:184] Created replication controller with name: proxy-service-rccf4, namespace: e2e-tests-proxy-5skfs, replica count: 1 I0622 12:22:34.949471 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 12:22:35.949654 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 12:22:36.949948 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 12:22:37.950157 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 12:22:38.950409 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 12:22:39.950685 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 12:22:40.950922 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 12:22:41.951410 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 12:22:42.951641 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 12:22:43.951844 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0622 12:22:44.952034 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 12:22:45.952222 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 12:22:46.952437 7 runners.go:184] proxy-service-rccf4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 22 12:22:46.955: INFO: setup took 13.161751322s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 22 12:22:46.960: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5skfs/pods/http:proxy-service-rccf4-27pmx:160/proxy/: foo (200; 4.576152ms) Jun 22 12:22:46.960: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5skfs/pods/proxy-service-rccf4-27pmx:160/proxy/: foo (200; 4.220819ms) Jun 22 12:22:46.960: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5skfs/pods/proxy-service-rccf4-27pmx/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-vsfws [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Jun 22 12:22:56.380: INFO: Found 0 stateful pods, waiting for 3 Jun 22 12:23:06.386: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 12:23:06.386: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 12:23:06.386: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Jun 22 12:23:16.386: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 12:23:16.386: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 12:23:16.386: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 22 12:23:16.412: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 22 12:23:26.496: INFO: Updating stateful set ss2 Jun 22 12:23:26.505: INFO: Waiting for Pod e2e-tests-statefulset-vsfws/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jun 22 12:23:36.631: INFO: Found 2 stateful pods, waiting for 3 Jun 22 12:23:46.637: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 12:23:46.637: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 12:23:46.637: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true 
STEP: Performing a phased rolling update Jun 22 12:23:46.663: INFO: Updating stateful set ss2 Jun 22 12:23:46.672: INFO: Waiting for Pod e2e-tests-statefulset-vsfws/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 22 12:23:56.698: INFO: Updating stateful set ss2 Jun 22 12:23:56.712: INFO: Waiting for StatefulSet e2e-tests-statefulset-vsfws/ss2 to complete update Jun 22 12:23:56.712: INFO: Waiting for Pod e2e-tests-statefulset-vsfws/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 22 12:24:06.721: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vsfws Jun 22 12:24:06.724: INFO: Scaling statefulset ss2 to 0 Jun 22 12:24:26.753: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 12:24:26.756: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:24:26.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-vsfws" for this suite. Jun 22 12:24:32.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:24:32.801: INFO: namespace: e2e-tests-statefulset-vsfws, resource: bindings, ignored listing per whitelist Jun 22 12:24:32.875: INFO: namespace e2e-tests-statefulset-vsfws deletion completed in 6.100192632s • [SLOW TEST:96.604 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:24:32.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 12:24:33.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ntj5g' Jun 22 12:24:35.689: INFO: stderr: "" Jun 22 12:24:35.689: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the 
pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 22 12:24:40.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ntj5g -o json' Jun 22 12:24:40.837: INFO: stderr: "" Jun 22 12:24:40.837: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-22T12:24:35Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-ntj5g\",\n \"resourceVersion\": \"17299733\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-ntj5g/pods/e2e-test-nginx-pod\",\n \"uid\": \"5584e6c7-b483-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-cpxzn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-cpxzn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-cpxzn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T12:24:35Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T12:24:39Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T12:24:39Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T12:24:35Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b424e1428eccb99b3794df3964a5b07e1280cbe6d2394803fb907f07cf3affaf\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-22T12:24:38Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.199\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-22T12:24:35Z\"\n }\n}\n" STEP: replace the image in the pod Jun 22 12:24:40.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-ntj5g' Jun 22 12:24:41.081: INFO: stderr: "" Jun 22 12:24:41.081: INFO: stdout: 
"pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jun 22 12:24:41.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ntj5g' Jun 22 12:24:44.399: INFO: stderr: "" Jun 22 12:24:44.399: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:24:44.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ntj5g" for this suite. Jun 22 12:24:50.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:24:50.447: INFO: namespace: e2e-tests-kubectl-ntj5g, resource: bindings, ignored listing per whitelist Jun 22 12:24:50.503: INFO: namespace e2e-tests-kubectl-ntj5g deletion completed in 6.09952291s • [SLOW TEST:17.628 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:24:50.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 12:24:50.685: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 22 12:24:55.690: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 22 12:24:55.690: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 22 12:24:55.708: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-grmr9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-grmr9/deployments/test-cleanup-deployment,UID:6174a22a-b483-11ea-99e8-0242ac110002,ResourceVersion:17299804,Generation:1,CreationTimestamp:2020-06-22 12:24:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 22 12:24:55.730: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Jun 22 12:24:55.731: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 22 12:24:55.731: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-grmr9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-grmr9/replicasets/test-cleanup-controller,UID:5e752678-b483-11ea-99e8-0242ac110002,ResourceVersion:17299805,Generation:1,CreationTimestamp:2020-06-22 12:24:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6174a22a-b483-11ea-99e8-0242ac110002 0xc001e287b7 0xc001e287b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 22 12:24:55.740: INFO: Pod "test-cleanup-controller-vv46x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-vv46x,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-grmr9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-grmr9/pods/test-cleanup-controller-vv46x,UID:5e78a18c-b483-11ea-99e8-0242ac110002,ResourceVersion:17299797,Generation:0,CreationTimestamp:2020-06-22 12:24:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 5e752678-b483-11ea-99e8-0242ac110002 0xc001ee9117 0xc001ee9118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2vw5m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2vw5m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-2vw5m true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ee9190} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ee92c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:24:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:24:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:24:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:24:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.200,StartTime:2020-06-22 12:24:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 12:24:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0ee8ee6e4b7130abefce3133cd0370319cb04ec187c78658fbb6af5ff7d07a82}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:24:55.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-grmr9" for this suite. 
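The Deployment dump above shows RevisionHistoryLimit:*0, which is what makes old ReplicaSets eligible for deletion once the rollout completes. A sketch of a comparable Deployment; the selector, label, and redis image follow the logged spec, while the manifest as a whole is reconstructed rather than copied:

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0             # keep no old ReplicaSets after a new revision rolls out
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
kubectl get rs -l name=cleanup-pod    # after the rollout only the current ReplicaSet should remain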
Jun 22 12:25:01.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:25:01.895: INFO: namespace: e2e-tests-deployment-grmr9, resource: bindings, ignored listing per whitelist Jun 22 12:25:01.987: INFO: namespace e2e-tests-deployment-grmr9 deletion completed in 6.210855592s • [SLOW TEST:11.484 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:25:01.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 12:25:02.149: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jun 22 12:25:02.154: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-qxt4l/daemonsets","resourceVersion":"17299862"},"items":null} Jun 22 12:25:02.155: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-qxt4l/pods","resourceVersion":"17299862"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:25:02.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-qxt4l" for this suite. 
Jun 22 12:25:08.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:25:08.219: INFO: namespace: e2e-tests-daemonsets-qxt4l, resource: bindings, ignored listing per whitelist Jun 22 12:25:08.258: INFO: namespace e2e-tests-daemonsets-qxt4l deletion completed in 6.093194194s S [SKIPPING] [6.271 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 22 12:25:02.149: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:25:08.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 22 12:25:08.401: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-g4k5g,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4k5g/configmaps/e2e-watch-test-watch-closed,UID:68fd087a-b483-11ea-99e8-0242ac110002,ResourceVersion:17299884,Generation:0,CreationTimestamp:2020-06-22 12:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 12:25:08.402: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-g4k5g,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4k5g/configmaps/e2e-watch-test-watch-closed,UID:68fd087a-b483-11ea-99e8-0242ac110002,ResourceVersion:17299885,Generation:0,CreationTimestamp:2020-06-22 12:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 22 
12:25:08.419: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-g4k5g,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4k5g/configmaps/e2e-watch-test-watch-closed,UID:68fd087a-b483-11ea-99e8-0242ac110002,ResourceVersion:17299886,Generation:0,CreationTimestamp:2020-06-22 12:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 12:25:08.419: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-g4k5g,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4k5g/configmaps/e2e-watch-test-watch-closed,UID:68fd087a-b483-11ea-99e8-0242ac110002,ResourceVersion:17299887,Generation:0,CreationTimestamp:2020-06-22 12:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:25:08.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-g4k5g" for this suite. Jun 22 12:25:14.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:25:14.471: INFO: namespace: e2e-tests-watch-g4k5g, resource: bindings, ignored listing per whitelist Jun 22 12:25:14.510: INFO: namespace e2e-tests-watch-g4k5g deletion completed in 6.086508388s • [SLOW TEST:6.251 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:25:14.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 22 12:25:19.156: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6cbd3190-b483-11ea-8cd8-0242ac11001b" Jun 22 
12:25:19.156: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6cbd3190-b483-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-pods-rthjc" to be "terminated due to deadline exceeded" Jun 22 12:25:19.160: INFO: Pod "pod-update-activedeadlineseconds-6cbd3190-b483-11ea-8cd8-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 3.370799ms Jun 22 12:25:21.164: INFO: Pod "pod-update-activedeadlineseconds-6cbd3190-b483-11ea-8cd8-0242ac11001b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007951211s Jun 22 12:25:21.164: INFO: Pod "pod-update-activedeadlineseconds-6cbd3190-b483-11ea-8cd8-0242ac11001b" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:25:21.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rthjc" for this suite. Jun 22 12:25:27.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:25:27.242: INFO: namespace: e2e-tests-pods-rthjc, resource: bindings, ignored listing per whitelist Jun 22 12:25:27.259: INFO: namespace e2e-tests-pods-rthjc deletion completed in 6.089506351s • [SLOW TEST:12.749 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:25:27.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jun 22 12:25:27.349: INFO: namespace e2e-tests-kubectl-6hxlb Jun 22 12:25:27.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6hxlb' Jun 22 12:25:27.579: INFO: stderr: "" Jun 22 12:25:27.579: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 22 12:25:28.584: INFO: Selector matched 1 pods for map[app:redis] Jun 22 12:25:28.584: INFO: Found 0 / 1 Jun 22 12:25:29.584: INFO: Selector matched 1 pods for map[app:redis] Jun 22 12:25:29.584: INFO: Found 0 / 1 Jun 22 12:25:30.584: INFO: Selector matched 1 pods for map[app:redis] Jun 22 12:25:30.584: INFO: Found 1 / 1 Jun 22 12:25:30.584: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 22 12:25:30.588: INFO: Selector matched 1 pods for map[app:redis] Jun 22 12:25:30.588: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 22 12:25:30.588: INFO: wait on redis-master startup in e2e-tests-kubectl-6hxlb Jun 22 12:25:30.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nkmt9 redis-master --namespace=e2e-tests-kubectl-6hxlb' Jun 22 12:25:30.704: INFO: stderr: "" Jun 22 12:25:30.704: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jun 12:25:30.409 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jun 12:25:30.410 # Server started, Redis version 3.2.12\n1:M 22 Jun 12:25:30.410 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jun 12:25:30.410 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jun 22 12:25:30.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-6hxlb' Jun 22 12:25:30.863: INFO: stderr: "" Jun 22 12:25:30.863: INFO: stdout: "service/rm2 exposed\n" Jun 22 12:25:30.886: INFO: Service rm2 in namespace e2e-tests-kubectl-6hxlb found. STEP: exposing service Jun 22 12:25:32.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-6hxlb' Jun 22 12:25:33.076: INFO: stderr: "" Jun 22 12:25:33.076: INFO: stdout: "service/rm3 exposed\n" Jun 22 12:25:33.083: INFO: Service rm3 in namespace e2e-tests-kubectl-6hxlb found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:25:35.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6hxlb" for this suite. 
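The expose test chains two services in front of the same redis-master pods. The expose commands below are lifted from the log (minus the test namespace); the RC manifest file name and the final endpoints check are additions for illustration:

kubectl create -f redis-master-rc.yaml                                    # hypothetical manifest for the redis-master RC
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get endpoints rm2 rm3                                             # both should resolve to the same redis pod on 6379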
Jun 22 12:25:59.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:25:59.135: INFO: namespace: e2e-tests-kubectl-6hxlb, resource: bindings, ignored listing per whitelist Jun 22 12:25:59.179: INFO: namespace e2e-tests-kubectl-6hxlb deletion completed in 24.084786846s • [SLOW TEST:31.920 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:25:59.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:25:59.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-srp4s" for this suite. 
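The "secure master service" test only inspects the built-in kubernetes service in the default namespace and checks that it serves an https port (443). A rough command-line equivalent; the jsonpath expression is my phrasing of what the test asserts:

kubectl get service kubernetes -n default
kubectl get service kubernetes -n default \
  -o jsonpath='{.spec.ports[0].name} {.spec.ports[0].port}{"\n"}'   # expected: https 443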
Jun 22 12:26:05.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:26:05.395: INFO: namespace: e2e-tests-services-srp4s, resource: bindings, ignored listing per whitelist Jun 22 12:26:05.470: INFO: namespace e2e-tests-services-srp4s deletion completed in 6.15659993s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.291 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:26:05.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:26:35.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-4k8vv" for this suite. 
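The container-runtime test starts containers that exit under the three restart policies (the terminate-cmd-rpa/rpof/rpn names map to Always, OnFailure, Never) and compares phase, restart count, and state. A minimal sketch of the Never case; pod name, image, and command are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 0"]   # exits immediately with success
EOF
kubectl get pod terminate-cmd-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}{"\n"}'   # expected: Succeeded 0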
Jun 22 12:26:41.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:26:41.200: INFO: namespace: e2e-tests-container-runtime-4k8vv, resource: bindings, ignored listing per whitelist Jun 22 12:26:41.226: INFO: namespace e2e-tests-container-runtime-4k8vv deletion completed in 6.090616562s • [SLOW TEST:35.756 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:26:41.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 22 12:26:41.389: INFO: Waiting up to 5m0s for pod "pod-a071a823-b483-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-56wk7" to be "success or failure" Jun 22 12:26:41.394: INFO: Pod "pod-a071a823-b483-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259293ms Jun 22 12:26:43.398: INFO: Pod "pod-a071a823-b483-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00846639s Jun 22 12:26:45.402: INFO: Pod "pod-a071a823-b483-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012675577s STEP: Saw pod success Jun 22 12:26:45.402: INFO: Pod "pod-a071a823-b483-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:26:45.404: INFO: Trying to get logs from node hunter-worker2 pod pod-a071a823-b483-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 12:26:45.431: INFO: Waiting for pod pod-a071a823-b483-11ea-8cd8-0242ac11001b to disappear Jun 22 12:26:45.502: INFO: Pod pod-a071a823-b483-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:26:45.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-56wk7" for this suite. 
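The emptyDir cases in this run, (non-root,0644,tmpfs) here plus the (non-root,0777,default) and (root,0777,tmpfs) variants further down, differ only in the volume medium, the file mode being checked, and whether the container runs as a non-root UID. A hand-rolled equivalent of the tmpfs/0644/non-root shape (image, UID and paths are illustrative, not the e2e fixture):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/volume/data && chmod 0644 /mnt/volume/data && ls -l /mnt/volume/data"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # the "tmpfs" part; omit for the default-medium variant
EOF
# As in the log above, the framework then waits for phase Succeeded
# ("success or failure") and reads the container log to verify the file.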
Jun 22 12:26:51.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:26:51.570: INFO: namespace: e2e-tests-emptydir-56wk7, resource: bindings, ignored listing per whitelist Jun 22 12:26:51.621: INFO: namespace e2e-tests-emptydir-56wk7 deletion completed in 6.114671794s • [SLOW TEST:10.394 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:26:51.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 22 12:26:51.786: INFO: Waiting up to 5m0s for pod "pod-a69f9859-b483-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-6mc7k" to be "success or failure" Jun 22 12:26:51.815: INFO: Pod "pod-a69f9859-b483-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.829241ms Jun 22 12:26:53.818: INFO: Pod "pod-a69f9859-b483-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032297161s Jun 22 12:26:55.823: INFO: Pod "pod-a69f9859-b483-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036901908s STEP: Saw pod success Jun 22 12:26:55.823: INFO: Pod "pod-a69f9859-b483-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:26:55.826: INFO: Trying to get logs from node hunter-worker2 pod pod-a69f9859-b483-11ea-8cd8-0242ac11001b container test-container: STEP: delete the pod Jun 22 12:26:56.008: INFO: Waiting for pod pod-a69f9859-b483-11ea-8cd8-0242ac11001b to disappear Jun 22 12:26:56.054: INFO: Pod pod-a69f9859-b483-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:26:56.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6mc7k" for this suite. 
Jun 22 12:27:02.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:27:02.228: INFO: namespace: e2e-tests-emptydir-6mc7k, resource: bindings, ignored listing per whitelist Jun 22 12:27:02.258: INFO: namespace e2e-tests-emptydir-6mc7k deletion completed in 6.20004128s • [SLOW TEST:10.637 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:27:02.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-acf1456e-b483-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 12:27:02.360: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-acf2b071-b483-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-pwh2w" to be "success or failure" Jun 22 12:27:02.364: INFO: Pod "pod-projected-secrets-acf2b071-b483-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.390495ms Jun 22 12:27:04.368: INFO: Pod "pod-projected-secrets-acf2b071-b483-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007819121s Jun 22 12:27:06.373: INFO: Pod "pod-projected-secrets-acf2b071-b483-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012442381s STEP: Saw pod success Jun 22 12:27:06.373: INFO: Pod "pod-projected-secrets-acf2b071-b483-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:27:06.375: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-acf2b071-b483-11ea-8cd8-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 22 12:27:06.410: INFO: Waiting for pod pod-projected-secrets-acf2b071-b483-11ea-8cd8-0242ac11001b to disappear Jun 22 12:27:06.441: INFO: Pod pod-projected-secrets-acf2b071-b483-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:27:06.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pwh2w" for this suite. 
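"With mappings" in the projected-secret spec above means the volume uses an items: list to remap a secret key onto a chosen path (optionally with a per-file mode) instead of exposing every key under its own name. A hedged sketch of that wiring; the secret name, key and path here are made up for illustration:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret-mapped-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: new-path-data-1   # the "mapping": the key is exposed under this path
            mode: 0400
EOF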
Jun 22 12:27:12.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:27:12.601: INFO: namespace: e2e-tests-projected-pwh2w, resource: bindings, ignored listing per whitelist Jun 22 12:27:12.613: INFO: namespace e2e-tests-projected-pwh2w deletion completed in 6.169074135s • [SLOW TEST:10.355 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:27:12.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 22 12:27:16.794: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-b322e352-b483-11ea-8cd8-0242ac11001b,GenerateName:,Namespace:e2e-tests-events-qz5xs,SelfLink:/api/v1/namespaces/e2e-tests-events-qz5xs/pods/send-events-b322e352-b483-11ea-8cd8-0242ac11001b,UID:b324ae7c-b483-11ea-99e8-0242ac110002,ResourceVersion:17300385,Generation:0,CreationTimestamp:2020-06-22 12:27:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 734503599,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rc2kl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rc2kl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-rc2kl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026d6df0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0026d6e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:27:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:27:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:27:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 12:27:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.169,StartTime:2020-06-22 12:27:12 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-22 12:27:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://4e7f5377d9532c515a30267b505f6c5409e55295f4f6fb00d7afb973faa10e21}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jun 22 12:27:18.799: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 22 12:27:20.804: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:27:20.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-qz5xs" for this suite. Jun 22 12:27:58.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:27:58.946: INFO: namespace: e2e-tests-events-qz5xs, resource: bindings, ignored listing per whitelist Jun 22 12:27:58.948: INFO: namespace e2e-tests-events-qz5xs deletion completed in 38.132299624s • [SLOW TEST:46.335 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:27:58.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 22 12:28:09.116: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 
12:28:09.116: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:09.152060 7 log.go:172] (0xc001978370) (0xc001ea1ea0) Create stream I0622 12:28:09.152089 7 log.go:172] (0xc001978370) (0xc001ea1ea0) Stream added, broadcasting: 1 I0622 12:28:09.154641 7 log.go:172] (0xc001978370) Reply frame received for 1 I0622 12:28:09.154673 7 log.go:172] (0xc001978370) (0xc001a43e00) Create stream I0622 12:28:09.154684 7 log.go:172] (0xc001978370) (0xc001a43e00) Stream added, broadcasting: 3 I0622 12:28:09.155792 7 log.go:172] (0xc001978370) Reply frame received for 3 I0622 12:28:09.155930 7 log.go:172] (0xc001978370) (0xc00213a1e0) Create stream I0622 12:28:09.155970 7 log.go:172] (0xc001978370) (0xc00213a1e0) Stream added, broadcasting: 5 I0622 12:28:09.156966 7 log.go:172] (0xc001978370) Reply frame received for 5 I0622 12:28:09.246238 7 log.go:172] (0xc001978370) Data frame received for 5 I0622 12:28:09.246271 7 log.go:172] (0xc00213a1e0) (5) Data frame handling I0622 12:28:09.246299 7 log.go:172] (0xc001978370) Data frame received for 3 I0622 12:28:09.246309 7 log.go:172] (0xc001a43e00) (3) Data frame handling I0622 12:28:09.246320 7 log.go:172] (0xc001a43e00) (3) Data frame sent I0622 12:28:09.246333 7 log.go:172] (0xc001978370) Data frame received for 3 I0622 12:28:09.246345 7 log.go:172] (0xc001a43e00) (3) Data frame handling I0622 12:28:09.249636 7 log.go:172] (0xc001978370) Data frame received for 1 I0622 12:28:09.249652 7 log.go:172] (0xc001ea1ea0) (1) Data frame handling I0622 12:28:09.249659 7 log.go:172] (0xc001ea1ea0) (1) Data frame sent I0622 12:28:09.249672 7 log.go:172] (0xc001978370) (0xc001ea1ea0) Stream removed, broadcasting: 1 I0622 12:28:09.249680 7 log.go:172] (0xc001978370) Go away received I0622 12:28:09.249772 7 log.go:172] (0xc001978370) (0xc001ea1ea0) Stream removed, broadcasting: 1 I0622 12:28:09.249794 7 log.go:172] (0xc001978370) (0xc001a43e00) Stream removed, broadcasting: 3 I0622 12:28:09.249805 7 log.go:172] (0xc001978370) (0xc00213a1e0) Stream removed, broadcasting: 5 Jun 22 12:28:09.249: INFO: Exec stderr: "" Jun 22 12:28:09.249: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:28:09.249: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:09.277278 7 log.go:172] (0xc000e12790) (0xc000ba4280) Create stream I0622 12:28:09.277302 7 log.go:172] (0xc000e12790) (0xc000ba4280) Stream added, broadcasting: 1 I0622 12:28:09.279056 7 log.go:172] (0xc000e12790) Reply frame received for 1 I0622 12:28:09.279092 7 log.go:172] (0xc000e12790) (0xc000b8fae0) Create stream I0622 12:28:09.279105 7 log.go:172] (0xc000e12790) (0xc000b8fae0) Stream added, broadcasting: 3 I0622 12:28:09.279853 7 log.go:172] (0xc000e12790) Reply frame received for 3 I0622 12:28:09.279886 7 log.go:172] (0xc000e12790) (0xc001ea1f40) Create stream I0622 12:28:09.279899 7 log.go:172] (0xc000e12790) (0xc001ea1f40) Stream added, broadcasting: 5 I0622 12:28:09.280787 7 log.go:172] (0xc000e12790) Reply frame received for 5 I0622 12:28:09.346576 7 log.go:172] (0xc000e12790) Data frame received for 5 I0622 12:28:09.346650 7 log.go:172] (0xc001ea1f40) (5) Data frame handling I0622 12:28:09.346698 7 log.go:172] (0xc000e12790) Data frame received for 3 I0622 12:28:09.346776 7 log.go:172] (0xc000b8fae0) (3) Data frame handling I0622 12:28:09.346839 7 log.go:172] (0xc000b8fae0) (3) Data frame sent I0622 12:28:09.346857 7 log.go:172] 
(0xc000e12790) Data frame received for 3 I0622 12:28:09.346873 7 log.go:172] (0xc000b8fae0) (3) Data frame handling I0622 12:28:09.347843 7 log.go:172] (0xc000e12790) Data frame received for 1 I0622 12:28:09.347864 7 log.go:172] (0xc000ba4280) (1) Data frame handling I0622 12:28:09.347882 7 log.go:172] (0xc000ba4280) (1) Data frame sent I0622 12:28:09.347929 7 log.go:172] (0xc000e12790) (0xc000ba4280) Stream removed, broadcasting: 1 I0622 12:28:09.348073 7 log.go:172] (0xc000e12790) Go away received I0622 12:28:09.348112 7 log.go:172] (0xc000e12790) (0xc000ba4280) Stream removed, broadcasting: 1 I0622 12:28:09.348140 7 log.go:172] (0xc000e12790) (0xc000b8fae0) Stream removed, broadcasting: 3 I0622 12:28:09.348163 7 log.go:172] (0xc000e12790) (0xc001ea1f40) Stream removed, broadcasting: 5 Jun 22 12:28:09.348: INFO: Exec stderr: "" Jun 22 12:28:09.348: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:28:09.348: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:09.375797 7 log.go:172] (0xc000938d10) (0xc000b8fcc0) Create stream I0622 12:28:09.375818 7 log.go:172] (0xc000938d10) (0xc000b8fcc0) Stream added, broadcasting: 1 I0622 12:28:09.378131 7 log.go:172] (0xc000938d10) Reply frame received for 1 I0622 12:28:09.378164 7 log.go:172] (0xc000938d10) (0xc000b8fd60) Create stream I0622 12:28:09.378185 7 log.go:172] (0xc000938d10) (0xc000b8fd60) Stream added, broadcasting: 3 I0622 12:28:09.378845 7 log.go:172] (0xc000938d10) Reply frame received for 3 I0622 12:28:09.378874 7 log.go:172] (0xc000938d10) (0xc000ba43c0) Create stream I0622 12:28:09.378884 7 log.go:172] (0xc000938d10) (0xc000ba43c0) Stream added, broadcasting: 5 I0622 12:28:09.379594 7 log.go:172] (0xc000938d10) Reply frame received for 5 I0622 12:28:09.428259 7 log.go:172] (0xc000938d10) Data frame received for 3 I0622 12:28:09.428294 7 log.go:172] (0xc000b8fd60) (3) Data frame handling I0622 12:28:09.428304 7 log.go:172] (0xc000b8fd60) (3) Data frame sent I0622 12:28:09.428336 7 log.go:172] (0xc000938d10) Data frame received for 5 I0622 12:28:09.428354 7 log.go:172] (0xc000ba43c0) (5) Data frame handling I0622 12:28:09.428385 7 log.go:172] (0xc000938d10) Data frame received for 3 I0622 12:28:09.428421 7 log.go:172] (0xc000b8fd60) (3) Data frame handling I0622 12:28:09.429655 7 log.go:172] (0xc000938d10) Data frame received for 1 I0622 12:28:09.429673 7 log.go:172] (0xc000b8fcc0) (1) Data frame handling I0622 12:28:09.429694 7 log.go:172] (0xc000b8fcc0) (1) Data frame sent I0622 12:28:09.429711 7 log.go:172] (0xc000938d10) (0xc000b8fcc0) Stream removed, broadcasting: 1 I0622 12:28:09.429726 7 log.go:172] (0xc000938d10) Go away received I0622 12:28:09.429823 7 log.go:172] (0xc000938d10) (0xc000b8fcc0) Stream removed, broadcasting: 1 I0622 12:28:09.429837 7 log.go:172] (0xc000938d10) (0xc000b8fd60) Stream removed, broadcasting: 3 I0622 12:28:09.429846 7 log.go:172] (0xc000938d10) (0xc000ba43c0) Stream removed, broadcasting: 5 Jun 22 12:28:09.429: INFO: Exec stderr: "" Jun 22 12:28:09.429: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:28:09.429: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:09.460946 7 log.go:172] (0xc0006382c0) (0xc0022514a0) Create stream I0622 12:28:09.460992 7 
log.go:172] (0xc0006382c0) (0xc0022514a0) Stream added, broadcasting: 1 I0622 12:28:09.463477 7 log.go:172] (0xc0006382c0) Reply frame received for 1 I0622 12:28:09.463525 7 log.go:172] (0xc0006382c0) (0xc00213a280) Create stream I0622 12:28:09.463539 7 log.go:172] (0xc0006382c0) (0xc00213a280) Stream added, broadcasting: 3 I0622 12:28:09.464444 7 log.go:172] (0xc0006382c0) Reply frame received for 3 I0622 12:28:09.464489 7 log.go:172] (0xc0006382c0) (0xc00213a320) Create stream I0622 12:28:09.464503 7 log.go:172] (0xc0006382c0) (0xc00213a320) Stream added, broadcasting: 5 I0622 12:28:09.465718 7 log.go:172] (0xc0006382c0) Reply frame received for 5 I0622 12:28:09.546599 7 log.go:172] (0xc0006382c0) Data frame received for 5 I0622 12:28:09.546634 7 log.go:172] (0xc00213a320) (5) Data frame handling I0622 12:28:09.546676 7 log.go:172] (0xc0006382c0) Data frame received for 3 I0622 12:28:09.546721 7 log.go:172] (0xc00213a280) (3) Data frame handling I0622 12:28:09.546745 7 log.go:172] (0xc00213a280) (3) Data frame sent I0622 12:28:09.546765 7 log.go:172] (0xc0006382c0) Data frame received for 3 I0622 12:28:09.546773 7 log.go:172] (0xc00213a280) (3) Data frame handling I0622 12:28:09.547984 7 log.go:172] (0xc0006382c0) Data frame received for 1 I0622 12:28:09.548005 7 log.go:172] (0xc0022514a0) (1) Data frame handling I0622 12:28:09.548031 7 log.go:172] (0xc0022514a0) (1) Data frame sent I0622 12:28:09.548409 7 log.go:172] (0xc0006382c0) (0xc0022514a0) Stream removed, broadcasting: 1 I0622 12:28:09.548441 7 log.go:172] (0xc0006382c0) Go away received I0622 12:28:09.548551 7 log.go:172] (0xc0006382c0) (0xc0022514a0) Stream removed, broadcasting: 1 I0622 12:28:09.548584 7 log.go:172] (0xc0006382c0) (0xc00213a280) Stream removed, broadcasting: 3 I0622 12:28:09.548608 7 log.go:172] (0xc0006382c0) (0xc00213a320) Stream removed, broadcasting: 5 Jun 22 12:28:09.548: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 22 12:28:09.548: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:28:09.548: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:09.582618 7 log.go:172] (0xc0009391e0) (0xc001838280) Create stream I0622 12:28:09.582670 7 log.go:172] (0xc0009391e0) (0xc001838280) Stream added, broadcasting: 1 I0622 12:28:09.586127 7 log.go:172] (0xc0009391e0) Reply frame received for 1 I0622 12:28:09.586174 7 log.go:172] (0xc0009391e0) (0xc002251540) Create stream I0622 12:28:09.586191 7 log.go:172] (0xc0009391e0) (0xc002251540) Stream added, broadcasting: 3 I0622 12:28:09.587485 7 log.go:172] (0xc0009391e0) Reply frame received for 3 I0622 12:28:09.587541 7 log.go:172] (0xc0009391e0) (0xc000ba4500) Create stream I0622 12:28:09.587576 7 log.go:172] (0xc0009391e0) (0xc000ba4500) Stream added, broadcasting: 5 I0622 12:28:09.589926 7 log.go:172] (0xc0009391e0) Reply frame received for 5 I0622 12:28:09.648543 7 log.go:172] (0xc0009391e0) Data frame received for 3 I0622 12:28:09.648586 7 log.go:172] (0xc002251540) (3) Data frame handling I0622 12:28:09.648608 7 log.go:172] (0xc002251540) (3) Data frame sent I0622 12:28:09.648632 7 log.go:172] (0xc0009391e0) Data frame received for 3 I0622 12:28:09.648644 7 log.go:172] (0xc002251540) (3) Data frame handling I0622 12:28:09.648658 7 log.go:172] (0xc0009391e0) Data frame received for 5 I0622 12:28:09.648682 7 
log.go:172] (0xc000ba4500) (5) Data frame handling I0622 12:28:09.650598 7 log.go:172] (0xc0009391e0) Data frame received for 1 I0622 12:28:09.650654 7 log.go:172] (0xc001838280) (1) Data frame handling I0622 12:28:09.650689 7 log.go:172] (0xc001838280) (1) Data frame sent I0622 12:28:09.650711 7 log.go:172] (0xc0009391e0) (0xc001838280) Stream removed, broadcasting: 1 I0622 12:28:09.650730 7 log.go:172] (0xc0009391e0) Go away received I0622 12:28:09.650799 7 log.go:172] (0xc0009391e0) (0xc001838280) Stream removed, broadcasting: 1 I0622 12:28:09.650819 7 log.go:172] (0xc0009391e0) (0xc002251540) Stream removed, broadcasting: 3 I0622 12:28:09.650826 7 log.go:172] (0xc0009391e0) (0xc000ba4500) Stream removed, broadcasting: 5 Jun 22 12:28:09.650: INFO: Exec stderr: "" Jun 22 12:28:09.650: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:28:09.650: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:09.679371 7 log.go:172] (0xc000638790) (0xc002251860) Create stream I0622 12:28:09.679396 7 log.go:172] (0xc000638790) (0xc002251860) Stream added, broadcasting: 1 I0622 12:28:09.682042 7 log.go:172] (0xc000638790) Reply frame received for 1 I0622 12:28:09.682093 7 log.go:172] (0xc000638790) (0xc00213a3c0) Create stream I0622 12:28:09.682107 7 log.go:172] (0xc000638790) (0xc00213a3c0) Stream added, broadcasting: 3 I0622 12:28:09.682978 7 log.go:172] (0xc000638790) Reply frame received for 3 I0622 12:28:09.683018 7 log.go:172] (0xc000638790) (0xc0018383c0) Create stream I0622 12:28:09.683034 7 log.go:172] (0xc000638790) (0xc0018383c0) Stream added, broadcasting: 5 I0622 12:28:09.683825 7 log.go:172] (0xc000638790) Reply frame received for 5 I0622 12:28:09.751511 7 log.go:172] (0xc000638790) Data frame received for 5 I0622 12:28:09.751554 7 log.go:172] (0xc000638790) Data frame received for 3 I0622 12:28:09.751602 7 log.go:172] (0xc00213a3c0) (3) Data frame handling I0622 12:28:09.751634 7 log.go:172] (0xc00213a3c0) (3) Data frame sent I0622 12:28:09.751656 7 log.go:172] (0xc000638790) Data frame received for 3 I0622 12:28:09.751675 7 log.go:172] (0xc00213a3c0) (3) Data frame handling I0622 12:28:09.751711 7 log.go:172] (0xc0018383c0) (5) Data frame handling I0622 12:28:09.752736 7 log.go:172] (0xc000638790) Data frame received for 1 I0622 12:28:09.752756 7 log.go:172] (0xc002251860) (1) Data frame handling I0622 12:28:09.752789 7 log.go:172] (0xc002251860) (1) Data frame sent I0622 12:28:09.752813 7 log.go:172] (0xc000638790) (0xc002251860) Stream removed, broadcasting: 1 I0622 12:28:09.752932 7 log.go:172] (0xc000638790) (0xc002251860) Stream removed, broadcasting: 1 I0622 12:28:09.752959 7 log.go:172] (0xc000638790) (0xc00213a3c0) Stream removed, broadcasting: 3 I0622 12:28:09.753063 7 log.go:172] (0xc000638790) Go away received I0622 12:28:09.753370 7 log.go:172] (0xc000638790) (0xc0018383c0) Stream removed, broadcasting: 5 Jun 22 12:28:09.753: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 22 12:28:09.753: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:28:09.753: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:09.779472 7 log.go:172] (0xc0009396b0) 
(0xc0018388c0) Create stream I0622 12:28:09.779518 7 log.go:172] (0xc0009396b0) (0xc0018388c0) Stream added, broadcasting: 1 I0622 12:28:09.781883 7 log.go:172] (0xc0009396b0) Reply frame received for 1 I0622 12:28:09.781914 7 log.go:172] (0xc0009396b0) (0xc001838aa0) Create stream I0622 12:28:09.781923 7 log.go:172] (0xc0009396b0) (0xc001838aa0) Stream added, broadcasting: 3 I0622 12:28:09.782836 7 log.go:172] (0xc0009396b0) Reply frame received for 3 I0622 12:28:09.782867 7 log.go:172] (0xc0009396b0) (0xc001838be0) Create stream I0622 12:28:09.782879 7 log.go:172] (0xc0009396b0) (0xc001838be0) Stream added, broadcasting: 5 I0622 12:28:09.783670 7 log.go:172] (0xc0009396b0) Reply frame received for 5 I0622 12:28:09.852827 7 log.go:172] (0xc0009396b0) Data frame received for 5 I0622 12:28:09.852862 7 log.go:172] (0xc001838be0) (5) Data frame handling I0622 12:28:09.852881 7 log.go:172] (0xc0009396b0) Data frame received for 3 I0622 12:28:09.852890 7 log.go:172] (0xc001838aa0) (3) Data frame handling I0622 12:28:09.852898 7 log.go:172] (0xc001838aa0) (3) Data frame sent I0622 12:28:09.852906 7 log.go:172] (0xc0009396b0) Data frame received for 3 I0622 12:28:09.852911 7 log.go:172] (0xc001838aa0) (3) Data frame handling I0622 12:28:09.854837 7 log.go:172] (0xc0009396b0) Data frame received for 1 I0622 12:28:09.854855 7 log.go:172] (0xc0018388c0) (1) Data frame handling I0622 12:28:09.854874 7 log.go:172] (0xc0018388c0) (1) Data frame sent I0622 12:28:09.854898 7 log.go:172] (0xc0009396b0) (0xc0018388c0) Stream removed, broadcasting: 1 I0622 12:28:09.854954 7 log.go:172] (0xc0009396b0) Go away received I0622 12:28:09.855007 7 log.go:172] (0xc0009396b0) (0xc0018388c0) Stream removed, broadcasting: 1 I0622 12:28:09.855025 7 log.go:172] (0xc0009396b0) (0xc001838aa0) Stream removed, broadcasting: 3 I0622 12:28:09.855038 7 log.go:172] (0xc0009396b0) (0xc001838be0) Stream removed, broadcasting: 5 Jun 22 12:28:09.855: INFO: Exec stderr: "" Jun 22 12:28:09.855: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:28:09.855: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:09.887303 7 log.go:172] (0xc000e406e0) (0xc00213a640) Create stream I0622 12:28:09.887336 7 log.go:172] (0xc000e406e0) (0xc00213a640) Stream added, broadcasting: 1 I0622 12:28:09.890311 7 log.go:172] (0xc000e406e0) Reply frame received for 1 I0622 12:28:09.890338 7 log.go:172] (0xc000e406e0) (0xc001838c80) Create stream I0622 12:28:09.890354 7 log.go:172] (0xc000e406e0) (0xc001838c80) Stream added, broadcasting: 3 I0622 12:28:09.891384 7 log.go:172] (0xc000e406e0) Reply frame received for 3 I0622 12:28:09.891472 7 log.go:172] (0xc000e406e0) (0xc000ba4640) Create stream I0622 12:28:09.891517 7 log.go:172] (0xc000e406e0) (0xc000ba4640) Stream added, broadcasting: 5 I0622 12:28:09.892579 7 log.go:172] (0xc000e406e0) Reply frame received for 5 I0622 12:28:09.944771 7 log.go:172] (0xc000e406e0) Data frame received for 5 I0622 12:28:09.944818 7 log.go:172] (0xc000ba4640) (5) Data frame handling I0622 12:28:09.944870 7 log.go:172] (0xc000e406e0) Data frame received for 3 I0622 12:28:09.944903 7 log.go:172] (0xc001838c80) (3) Data frame handling I0622 12:28:09.944933 7 log.go:172] (0xc001838c80) (3) Data frame sent I0622 12:28:09.944953 7 log.go:172] (0xc000e406e0) Data frame received for 3 I0622 12:28:09.944966 7 log.go:172] (0xc001838c80) (3) Data 
frame handling I0622 12:28:09.946251 7 log.go:172] (0xc000e406e0) Data frame received for 1 I0622 12:28:09.946315 7 log.go:172] (0xc00213a640) (1) Data frame handling I0622 12:28:09.946379 7 log.go:172] (0xc00213a640) (1) Data frame sent I0622 12:28:09.946409 7 log.go:172] (0xc000e406e0) (0xc00213a640) Stream removed, broadcasting: 1 I0622 12:28:09.946434 7 log.go:172] (0xc000e406e0) Go away received I0622 12:28:09.946571 7 log.go:172] (0xc000e406e0) (0xc00213a640) Stream removed, broadcasting: 1 I0622 12:28:09.946610 7 log.go:172] (0xc000e406e0) (0xc001838c80) Stream removed, broadcasting: 3 I0622 12:28:09.946626 7 log.go:172] (0xc000e406e0) (0xc000ba4640) Stream removed, broadcasting: 5 Jun 22 12:28:09.946: INFO: Exec stderr: "" Jun 22 12:28:09.946: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:28:09.946: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:09.983406 7 log.go:172] (0xc000939b80) (0xc001838f00) Create stream I0622 12:28:09.983433 7 log.go:172] (0xc000939b80) (0xc001838f00) Stream added, broadcasting: 1 I0622 12:28:09.986029 7 log.go:172] (0xc000939b80) Reply frame received for 1 I0622 12:28:09.986059 7 log.go:172] (0xc000939b80) (0xc000ea2140) Create stream I0622 12:28:09.986068 7 log.go:172] (0xc000939b80) (0xc000ea2140) Stream added, broadcasting: 3 I0622 12:28:09.986878 7 log.go:172] (0xc000939b80) Reply frame received for 3 I0622 12:28:09.986899 7 log.go:172] (0xc000939b80) (0xc000ba4780) Create stream I0622 12:28:09.986905 7 log.go:172] (0xc000939b80) (0xc000ba4780) Stream added, broadcasting: 5 I0622 12:28:09.987966 7 log.go:172] (0xc000939b80) Reply frame received for 5 I0622 12:28:10.038599 7 log.go:172] (0xc000939b80) Data frame received for 3 I0622 12:28:10.038649 7 log.go:172] (0xc000ea2140) (3) Data frame handling I0622 12:28:10.038677 7 log.go:172] (0xc000ea2140) (3) Data frame sent I0622 12:28:10.038691 7 log.go:172] (0xc000939b80) Data frame received for 3 I0622 12:28:10.038701 7 log.go:172] (0xc000ea2140) (3) Data frame handling I0622 12:28:10.038744 7 log.go:172] (0xc000939b80) Data frame received for 5 I0622 12:28:10.038775 7 log.go:172] (0xc000ba4780) (5) Data frame handling I0622 12:28:10.040223 7 log.go:172] (0xc000939b80) Data frame received for 1 I0622 12:28:10.040246 7 log.go:172] (0xc001838f00) (1) Data frame handling I0622 12:28:10.040261 7 log.go:172] (0xc001838f00) (1) Data frame sent I0622 12:28:10.040297 7 log.go:172] (0xc000939b80) (0xc001838f00) Stream removed, broadcasting: 1 I0622 12:28:10.040339 7 log.go:172] (0xc000939b80) Go away received I0622 12:28:10.040412 7 log.go:172] (0xc000939b80) (0xc001838f00) Stream removed, broadcasting: 1 I0622 12:28:10.040439 7 log.go:172] (0xc000939b80) (0xc000ea2140) Stream removed, broadcasting: 3 I0622 12:28:10.040452 7 log.go:172] (0xc000939b80) (0xc000ba4780) Stream removed, broadcasting: 5 Jun 22 12:28:10.040: INFO: Exec stderr: "" Jun 22 12:28:10.040: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tflst PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:28:10.040: INFO: >>> kubeConfig: /root/.kube/config I0622 12:28:10.076621 7 log.go:172] (0xc001978840) (0xc000ea2500) Create stream I0622 12:28:10.076647 7 log.go:172] (0xc001978840) (0xc000ea2500) Stream added, broadcasting: 1 I0622 
12:28:10.079601 7 log.go:172] (0xc001978840) Reply frame received for 1 I0622 12:28:10.079658 7 log.go:172] (0xc001978840) (0xc000ea2640) Create stream I0622 12:28:10.079675 7 log.go:172] (0xc001978840) (0xc000ea2640) Stream added, broadcasting: 3 I0622 12:28:10.080822 7 log.go:172] (0xc001978840) Reply frame received for 3 I0622 12:28:10.080867 7 log.go:172] (0xc001978840) (0xc00213a6e0) Create stream I0622 12:28:10.080885 7 log.go:172] (0xc001978840) (0xc00213a6e0) Stream added, broadcasting: 5 I0622 12:28:10.082009 7 log.go:172] (0xc001978840) Reply frame received for 5 I0622 12:28:10.143950 7 log.go:172] (0xc001978840) Data frame received for 5 I0622 12:28:10.143977 7 log.go:172] (0xc00213a6e0) (5) Data frame handling I0622 12:28:10.144009 7 log.go:172] (0xc001978840) Data frame received for 3 I0622 12:28:10.144047 7 log.go:172] (0xc000ea2640) (3) Data frame handling I0622 12:28:10.144068 7 log.go:172] (0xc000ea2640) (3) Data frame sent I0622 12:28:10.144234 7 log.go:172] (0xc001978840) Data frame received for 3 I0622 12:28:10.144259 7 log.go:172] (0xc000ea2640) (3) Data frame handling I0622 12:28:10.145710 7 log.go:172] (0xc001978840) Data frame received for 1 I0622 12:28:10.145735 7 log.go:172] (0xc000ea2500) (1) Data frame handling I0622 12:28:10.145756 7 log.go:172] (0xc000ea2500) (1) Data frame sent I0622 12:28:10.145792 7 log.go:172] (0xc001978840) (0xc000ea2500) Stream removed, broadcasting: 1 I0622 12:28:10.145824 7 log.go:172] (0xc001978840) Go away received I0622 12:28:10.145918 7 log.go:172] (0xc001978840) (0xc000ea2500) Stream removed, broadcasting: 1 I0622 12:28:10.145968 7 log.go:172] (0xc001978840) (0xc000ea2640) Stream removed, broadcasting: 3 I0622 12:28:10.145984 7 log.go:172] (0xc001978840) (0xc00213a6e0) Stream removed, broadcasting: 5 Jun 22 12:28:10.145: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:28:10.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-tflst" for this suite. 
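The repeated cat execs above compare /etc/hosts against a copy at /etc/hosts-original in three situations: busybox-1 and busybox-2 in the hostNetwork=false pod see the kubelet-managed file, busybox-3 does not because it supplies its own mount at /etc/hosts, and the hostNetwork=true pod is never managed. A rough sketch of the unmanaged-container shape; the volume type (a hostPath file here) and image are assumptions, the real fixture may wire this differently:

# A container that provides its own /etc/hosts mount, which is the condition
# under which the kubelet leaves the file alone (per the STEP text above).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  containers:
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: own-hosts
      mountPath: /etc/hosts      # container-specified mount => not kubelet-managed
  volumes:
  - name: own-hosts
    hostPath:
      path: /etc/hosts
EOF
kubectl exec etc-hosts-demo -c busybox-3 -- cat /etc/hosts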
Jun 22 12:29:12.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:29:12.199: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-tflst, resource: bindings, ignored listing per whitelist Jun 22 12:29:12.247: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-tflst deletion completed in 1m2.097239674s • [SLOW TEST:73.298 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:29:12.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 22 12:29:16.905: INFO: Successfully updated pod "annotationupdatefa6c24a0-b483-11ea-8cd8-0242ac11001b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:29:18.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gp8qh" for this suite. 
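The Downward API spec above ("Successfully updated pod annotationupdate...") projects metadata.annotations into a file, patches the annotations, and then expects the kubelet to rewrite the mounted file. A sketch of the relevant volume wiring, with illustrative names and annotation values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# Change the annotation; the projected file is eventually refreshed by the kubelet:
kubectl annotate pod annotationupdate-demo build=two --overwrite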
Jun 22 12:29:40.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:29:40.992: INFO: namespace: e2e-tests-downward-api-gp8qh, resource: bindings, ignored listing per whitelist Jun 22 12:29:41.054: INFO: namespace e2e-tests-downward-api-gp8qh deletion completed in 22.108108934s • [SLOW TEST:28.807 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:29:41.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:29:48.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-hfld7" for this suite. 
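"Adopt matching pods on creation" means an orphan pod whose labels already match a new ReplicationController's selector is taken over (it gains an ownerReference) rather than a second replica being created. Reproduced by hand it is roughly the following; the image is illustrative, while the pod name and the name=pod-adoption label come from the STEP text above:

# 1. An orphan pod carrying the label the controller will select on.
kubectl run pod-adoption --image=nginx --labels=name=pod-adoption --restart=Never
# 2. A replication controller with a matching selector and replicas: 1.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx
EOF
# 3. The existing pod now lists the RC as its owner instead of a new pod appearing.
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'
# Expected: ReplicationController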
Jun 22 12:30:10.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:30:10.327: INFO: namespace: e2e-tests-replication-controller-hfld7, resource: bindings, ignored listing per whitelist Jun 22 12:30:10.365: INFO: namespace e2e-tests-replication-controller-hfld7 deletion completed in 22.12624702s • [SLOW TEST:29.310 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:30:10.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-1d10318b-b484-11ea-8cd8-0242ac11001b STEP: Creating a pod to test consume secrets Jun 22 12:30:10.472: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1d1278dc-b484-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-gc99v" to be "success or failure" Jun 22 12:30:10.476: INFO: Pod "pod-projected-secrets-1d1278dc-b484-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.961527ms Jun 22 12:30:12.516: INFO: Pod "pod-projected-secrets-1d1278dc-b484-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043689779s Jun 22 12:30:14.520: INFO: Pod "pod-projected-secrets-1d1278dc-b484-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048113194s STEP: Saw pod success Jun 22 12:30:14.520: INFO: Pod "pod-projected-secrets-1d1278dc-b484-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:30:14.524: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-1d1278dc-b484-11ea-8cd8-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 22 12:30:14.544: INFO: Waiting for pod pod-projected-secrets-1d1278dc-b484-11ea-8cd8-0242ac11001b to disappear Jun 22 12:30:14.549: INFO: Pod pod-projected-secrets-1d1278dc-b484-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:30:14.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gc99v" for this suite. 
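Unlike the "with mappings" case earlier, this projected-secret variant uses no items: list, so each secret key simply appears as a file named after the key; file modes fall back to the volume default of 0644 (420 decimal, the same DefaultMode:*420 visible in the pod dump earlier in this run). Minimal shape of that pod, with placeholder secret and key names:

kubectl create secret generic demo-secret-plain --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0644
      sources:
      - secret:
          name: demo-secret-plain   # no items: every key becomes a file named after the key
EOF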
Jun 22 12:30:20.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:30:20.599: INFO: namespace: e2e-tests-projected-gc99v, resource: bindings, ignored listing per whitelist Jun 22 12:30:20.649: INFO: namespace e2e-tests-projected-gc99v deletion completed in 6.096997591s • [SLOW TEST:10.284 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:30:20.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 22 12:30:20.785: INFO: Waiting up to 5m0s for pod "downwardapi-volume-233744f9-b484-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-projected-gxljr" to be "success or failure" Jun 22 12:30:20.789: INFO: Pod "downwardapi-volume-233744f9-b484-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.702247ms Jun 22 12:30:22.793: INFO: Pod "downwardapi-volume-233744f9-b484-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00822932s Jun 22 12:30:24.797: INFO: Pod "downwardapi-volume-233744f9-b484-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012106003s STEP: Saw pod success Jun 22 12:30:24.797: INFO: Pod "downwardapi-volume-233744f9-b484-11ea-8cd8-0242ac11001b" satisfied condition "success or failure" Jun 22 12:30:24.799: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-233744f9-b484-11ea-8cd8-0242ac11001b container client-container: STEP: delete the pod Jun 22 12:30:24.820: INFO: Waiting for pod downwardapi-volume-233744f9-b484-11ea-8cd8-0242ac11001b to disappear Jun 22 12:30:24.824: INFO: Pod downwardapi-volume-233744f9-b484-11ea-8cd8-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:30:24.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gxljr" for this suite. 
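The point of the spec above is that the container sets no CPU limit, yet limits.cpu can still be projected through the downward API; in that case the reported value falls back to the node's allocatable CPU, which is what the test compares against. A sketch of that projection, with illustrative names and divisor:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # deliberately no resources.limits.cpu here
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m            # report the value in millicores
EOF
# With no limit set, the file holds the node's allocatable CPU instead.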
Jun 22 12:30:30.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 12:30:30.932: INFO: namespace: e2e-tests-projected-gxljr, resource: bindings, ignored listing per whitelist Jun 22 12:30:30.935: INFO: namespace e2e-tests-projected-gxljr deletion completed in 6.10754532s • [SLOW TEST:10.286 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 22 12:30:30.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zffrq STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 12:30:31.033: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 12:30:57.214: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.208 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zffrq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:30:57.214: INFO: >>> kubeConfig: /root/.kube/config I0622 12:30:57.239926 7 log.go:172] (0xc000e12790) (0xc00230c640) Create stream I0622 12:30:57.239960 7 log.go:172] (0xc000e12790) (0xc00230c640) Stream added, broadcasting: 1 I0622 12:30:57.241867 7 log.go:172] (0xc000e12790) Reply frame received for 1 I0622 12:30:57.241940 7 log.go:172] (0xc000e12790) (0xc00240a000) Create stream I0622 12:30:57.241965 7 log.go:172] (0xc000e12790) (0xc00240a000) Stream added, broadcasting: 3 I0622 12:30:57.242835 7 log.go:172] (0xc000e12790) Reply frame received for 3 I0622 12:30:57.242876 7 log.go:172] (0xc000e12790) (0xc000e465a0) Create stream I0622 12:30:57.242889 7 log.go:172] (0xc000e12790) (0xc000e465a0) Stream added, broadcasting: 5 I0622 12:30:57.243972 7 log.go:172] (0xc000e12790) Reply frame received for 5 I0622 12:30:58.347624 7 log.go:172] (0xc000e12790) Data frame received for 3 I0622 12:30:58.347653 7 log.go:172] (0xc00240a000) (3) Data frame handling I0622 12:30:58.347688 7 log.go:172] (0xc00240a000) (3) Data frame sent I0622 12:30:58.347723 7 log.go:172] (0xc000e12790) Data frame received for 3 I0622 12:30:58.347758 7 log.go:172] (0xc00240a000) (3) Data frame handling I0622 12:30:58.347785 7 log.go:172] (0xc000e12790) Data frame received for 5 I0622 12:30:58.347807 7 log.go:172] (0xc000e465a0) (5) Data frame handling I0622 12:30:58.349862 7 log.go:172] (0xc000e12790) Data frame 
received for 1 I0622 12:30:58.349901 7 log.go:172] (0xc00230c640) (1) Data frame handling I0622 12:30:58.349945 7 log.go:172] (0xc00230c640) (1) Data frame sent I0622 12:30:58.350021 7 log.go:172] (0xc000e12790) (0xc00230c640) Stream removed, broadcasting: 1 I0622 12:30:58.350141 7 log.go:172] (0xc000e12790) Go away received I0622 12:30:58.350170 7 log.go:172] (0xc000e12790) (0xc00230c640) Stream removed, broadcasting: 1 I0622 12:30:58.350208 7 log.go:172] (0xc000e12790) (0xc00240a000) Stream removed, broadcasting: 3 I0622 12:30:58.350232 7 log.go:172] (0xc000e12790) (0xc000e465a0) Stream removed, broadcasting: 5 Jun 22 12:30:58.350: INFO: Found all expected endpoints: [netserver-0] Jun 22 12:30:58.353: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.173 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zffrq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 12:30:58.353: INFO: >>> kubeConfig: /root/.kube/config I0622 12:30:58.388394 7 log.go:172] (0xc001978370) (0xc00240a280) Create stream I0622 12:30:58.388420 7 log.go:172] (0xc001978370) (0xc00240a280) Stream added, broadcasting: 1 I0622 12:30:58.390407 7 log.go:172] (0xc001978370) Reply frame received for 1 I0622 12:30:58.390443 7 log.go:172] (0xc001978370) (0xc000e46640) Create stream I0622 12:30:58.390455 7 log.go:172] (0xc001978370) (0xc000e46640) Stream added, broadcasting: 3 I0622 12:30:58.391225 7 log.go:172] (0xc001978370) Reply frame received for 3 I0622 12:30:58.391251 7 log.go:172] (0xc001978370) (0xc00230c820) Create stream I0622 12:30:58.391262 7 log.go:172] (0xc001978370) (0xc00230c820) Stream added, broadcasting: 5 I0622 12:30:58.392352 7 log.go:172] (0xc001978370) Reply frame received for 5 I0622 12:30:59.467826 7 log.go:172] (0xc001978370) Data frame received for 3 I0622 12:30:59.467869 7 log.go:172] (0xc000e46640) (3) Data frame handling I0622 12:30:59.467892 7 log.go:172] (0xc000e46640) (3) Data frame sent I0622 12:30:59.467913 7 log.go:172] (0xc001978370) Data frame received for 3 I0622 12:30:59.467930 7 log.go:172] (0xc000e46640) (3) Data frame handling I0622 12:30:59.468011 7 log.go:172] (0xc001978370) Data frame received for 5 I0622 12:30:59.468039 7 log.go:172] (0xc00230c820) (5) Data frame handling I0622 12:30:59.469811 7 log.go:172] (0xc001978370) Data frame received for 1 I0622 12:30:59.469836 7 log.go:172] (0xc00240a280) (1) Data frame handling I0622 12:30:59.469844 7 log.go:172] (0xc00240a280) (1) Data frame sent I0622 12:30:59.469853 7 log.go:172] (0xc001978370) (0xc00240a280) Stream removed, broadcasting: 1 I0622 12:30:59.469863 7 log.go:172] (0xc001978370) Go away received I0622 12:30:59.469937 7 log.go:172] (0xc001978370) (0xc00240a280) Stream removed, broadcasting: 1 I0622 12:30:59.469957 7 log.go:172] (0xc001978370) (0xc000e46640) Stream removed, broadcasting: 3 I0622 12:30:59.469965 7 log.go:172] (0xc001978370) (0xc00230c820) Stream removed, broadcasting: 5 Jun 22 12:30:59.469: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 22 12:30:59.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-zffrq" for this suite. 
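The UDP check above is carried out entirely by the command embedded in the ExecWithOptions lines: a hostexec container pipes the string hostName over UDP to each netserver pod IP on port 8081 and expects a non-empty reply identifying that endpoint (netserver-0 and netserver-1 in this run). Outside the framework, the same probe is simply:

# Pod/container names, IP and port are copied from the log above.
kubectl exec host-test-container-pod -c hostexec -- /bin/sh -c \
  "echo 'hostName' | nc -w 1 -u 10.244.2.208 8081 | grep -v '^\s*\$'"
# A non-empty line back means the netserver endpoint answered over UDP.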
Jun 22 12:31:23.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:31:23.535: INFO: namespace: e2e-tests-pod-network-test-zffrq, resource: bindings, ignored listing per whitelist
Jun 22 12:31:23.596: INFO: namespace e2e-tests-pod-network-test-zffrq deletion completed in 24.122438652s
• [SLOW TEST:52.661 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 22 12:31:23.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 22 12:31:23.741: INFO: Waiting up to 5m0s for pod "pod-48b5fd7d-b484-11ea-8cd8-0242ac11001b" in namespace "e2e-tests-emptydir-ftpcb" to be "success or failure"
Jun 22 12:31:23.767: INFO: Pod "pod-48b5fd7d-b484-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.207734ms
Jun 22 12:31:25.817: INFO: Pod "pod-48b5fd7d-b484-11ea-8cd8-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076027596s
Jun 22 12:31:27.821: INFO: Pod "pod-48b5fd7d-b484-11ea-8cd8-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079651423s
STEP: Saw pod success
Jun 22 12:31:27.821: INFO: Pod "pod-48b5fd7d-b484-11ea-8cd8-0242ac11001b" satisfied condition "success or failure"
Jun 22 12:31:27.823: INFO: Trying to get logs from node hunter-worker2 pod pod-48b5fd7d-b484-11ea-8cd8-0242ac11001b container test-container:
STEP: delete the pod
Jun 22 12:31:27.845: INFO: Waiting for pod pod-48b5fd7d-b484-11ea-8cd8-0242ac11001b to disappear
Jun 22 12:31:27.924: INFO: Pod pod-48b5fd7d-b484-11ea-8cd8-0242ac11001b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 22 12:31:27.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ftpcb" for this suite.
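The emptydir test above launches a short-lived pod whose volume is an emptyDir backed by tmpfs (medium Memory), has its container create a file with mode 0777 inside the mount, and then waits for the pod to reach a terminal phase. The client-go sketch below builds a comparable pod object and prints it; the image, shell command, volume name, and mount path are illustrative placeholders rather than the values the conformance test actually uses (the real test runs its own mount-test image).

```go
// emptydir_tmpfs_pod.go: a sketch of a pod spec similar to the one the
// emptyDir (root,0777,tmpfs) test creates. Placeholder image/command.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{GenerateName: "emptydir-0777-tmpfs-"},
		Spec: corev1.PodSpec{
			// The pod runs to completion, so the test can wait for Succeeded.
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						// Memory medium means the emptyDir is a tmpfs mount.
						Medium: corev1.StorageMediumMemory,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // placeholder image
				Command: []string{"/bin/sh", "-c",
					"touch /mnt/test/file && chmod 0777 /mnt/test/file && ls -l /mnt/test"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
		},
	}

	// Print the spec; it could equally be submitted with a clientset.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

Because the container exits after writing the file, the pod moves Pending -> Running -> Succeeded, which is exactly the phase progression logged above.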
Jun 22 12:31:33.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:31:33.968: INFO: namespace: e2e-tests-emptydir-ftpcb, resource: bindings, ignored listing per whitelist
Jun 22 12:31:34.020: INFO: namespace e2e-tests-emptydir-ftpcb deletion completed in 6.090335235s
• [SLOW TEST:10.423 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
Jun 22 12:31:34.020: INFO: Running AfterSuite actions on all nodes
Jun 22 12:31:34.020: INFO: Running AfterSuite actions on node 1
Jun 22 12:31:34.020: INFO: Skipping dumping logs from cluster
Ran 200 of 2164 Specs in 6279.177 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS
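The "Waiting up to 5m0s for pod ... to be "success or failure"" lines in the emptydir test above come from polling the pod phase until it is terminal. A standalone sketch of that kind of wait is shown below, written against the pre-1.18 client-go method signatures that match the v1.13 cluster in this log (newer client-go also requires a context.Context); the flag names and the 2-second poll interval are assumptions, not the framework's values.

```go
// wait_pod_terminal.go: poll a pod until it reaches Succeeded or Failed,
// roughly what the e2e framework's "success or failure" wait does.
package main

import (
	"flag"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodSuccessOrFailure returns nil once the pod is Succeeded,
// an error if it Failed or the timeout expires.
func waitForPodSuccessOrFailure(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending or Running; keep polling
		}
	})
}

func main() {
	kubeconfig := flag.String("kubeconfig", "/root/.kube/config", "path to kubeconfig")
	ns := flag.String("namespace", "default", "pod namespace")
	name := flag.String("pod", "", "pod name to wait for")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodSuccessOrFailure(cs, *ns, *name, 5*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod reached Succeeded")
}
```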