I0512 09:55:14.478112 6 e2e.go:224] Starting e2e run "acf66b8d-9436-11ea-92b2-0242ac11001c" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589277313 - Will randomize all specs
Will run 201 of 2164 specs

May 12 09:55:14.663: INFO: >>> kubeConfig: /root/.kube/config
May 12 09:55:14.665: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 09:55:14.683: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 09:55:14.720: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 09:55:14.720: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 12 09:55:14.720: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 09:55:14.730: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 12 09:55:14.730: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 09:55:14.730: INFO: e2e test version: v1.13.12
May 12 09:55:14.731: INFO: kube-apiserver version: v1.13.12
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:55:14.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
May 12 09:55:15.862: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
May 12 09:55:15.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
May 12 09:55:17.877: INFO: stderr: ""
May 12 09:55:17.877: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:55:17.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gtk9v" for this suite.
May 12 09:55:24.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:55:24.116: INFO: namespace: e2e-tests-kubectl-gtk9v, resource: bindings, ignored listing per whitelist
May 12 09:55:24.121: INFO: namespace e2e-tests-kubectl-gtk9v deletion completed in 6.130497502s

• [SLOW TEST:9.390 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:55:24.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0512 09:55:26.822791 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 09:55:26.822: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:55:26.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-98csd" for this suite.
May 12 09:55:32.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:55:32.967: INFO: namespace: e2e-tests-gc-98csd, resource: bindings, ignored listing per whitelist
May 12 09:55:33.014: INFO: namespace e2e-tests-gc-98csd deletion completed in 6.129660967s

• [SLOW TEST:8.893 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:55:33.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:55:41.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-42fnm" for this suite.
May 12 09:56:33.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:56:33.234: INFO: namespace: e2e-tests-kubelet-test-42fnm, resource: bindings, ignored listing per whitelist
May 12 09:56:33.243: INFO: namespace e2e-tests-kubelet-test-42fnm deletion completed in 52.092169891s

• [SLOW TEST:60.228 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:56:33.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 12 09:56:34.390: INFO: Waiting up to 5m0s for pod "pod-dceb7f36-9436-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-jvpv8" to be "success or failure"
May 12 09:56:34.640: INFO: Pod "pod-dceb7f36-9436-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 249.718364ms
May 12 09:56:36.959: INFO: Pod "pod-dceb7f36-9436-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.568763266s
May 12 09:56:38.963: INFO: Pod "pod-dceb7f36-9436-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572798495s
May 12 09:56:41.206: INFO: Pod "pod-dceb7f36-9436-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.815165496s
May 12 09:56:43.210: INFO: Pod "pod-dceb7f36-9436-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.819406689s
STEP: Saw pod success
May 12 09:56:43.210: INFO: Pod "pod-dceb7f36-9436-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 09:56:43.214: INFO: Trying to get logs from node hunter-worker pod pod-dceb7f36-9436-11ea-92b2-0242ac11001c container test-container:
STEP: delete the pod
May 12 09:56:43.257: INFO: Waiting for pod pod-dceb7f36-9436-11ea-92b2-0242ac11001c to disappear
May 12 09:56:43.264: INFO: Pod pod-dceb7f36-9436-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:56:43.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jvpv8" for this suite.
May 12 09:56:51.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:56:51.340: INFO: namespace: e2e-tests-emptydir-jvpv8, resource: bindings, ignored listing per whitelist
May 12 09:56:51.384: INFO: namespace e2e-tests-emptydir-jvpv8 deletion completed in 8.117220477s

• [SLOW TEST:18.141 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:56:51.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-qgnkh
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 12 09:56:51.609: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 12 09:57:24.000: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.166:8080/dial?request=hostName&protocol=udp&host=10.244.1.235&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-qgnkh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 09:57:24.000: INFO: >>> kubeConfig: /root/.kube/config
I0512 09:57:24.023212 6 log.go:172] (0xc001af44d0) (0xc0018d6960) Create stream
I0512 09:57:24.023253 6 log.go:172] (0xc001af44d0) (0xc0018d6960) Stream added, broadcasting: 1
I0512 09:57:24.025417 6 log.go:172] (0xc001af44d0) Reply frame received for 1
I0512 09:57:24.025464 6 log.go:172] (0xc001af44d0) (0xc001149180) Create stream
I0512 09:57:24.025480 6 log.go:172] (0xc001af44d0) (0xc001149180) Stream added, broadcasting: 3
I0512 09:57:24.026660 6 log.go:172] (0xc001af44d0) Reply frame received for 3
I0512 09:57:24.026708 6 log.go:172] (0xc001af44d0) (0xc001149220) Create stream
I0512 09:57:24.026730 6 log.go:172] (0xc001af44d0) (0xc001149220) Stream added, broadcasting: 5
I0512 09:57:24.027938 6 log.go:172] (0xc001af44d0) Reply frame received for 5
I0512 09:57:24.088531 6 log.go:172] (0xc001af44d0) Data frame received for 3
I0512 09:57:24.088573 6 log.go:172] (0xc001149180) (3) Data frame handling
I0512 09:57:24.088597 6 log.go:172] (0xc001149180) (3) Data frame sent
I0512 09:57:24.089604 6 log.go:172] (0xc001af44d0) Data frame received for 3
I0512 09:57:24.089621 6 log.go:172] (0xc001149180) (3) Data frame handling
I0512 09:57:24.089647 6 log.go:172] (0xc001af44d0) Data frame received for 5
I0512 09:57:24.089668 6 log.go:172] (0xc001149220) (5) Data frame handling
I0512 09:57:24.091983 6 log.go:172] (0xc001af44d0) Data frame received for 1
I0512 09:57:24.092004 6 log.go:172] (0xc0018d6960) (1) Data frame handling
I0512 09:57:24.092014 6 log.go:172] (0xc0018d6960) (1) Data frame sent
I0512 09:57:24.092030 6 log.go:172] (0xc001af44d0) (0xc0018d6960) Stream removed, broadcasting: 1
I0512 09:57:24.092053 6 log.go:172] (0xc001af44d0) Go away received
I0512 09:57:24.092219 6 log.go:172] (0xc001af44d0) (0xc0018d6960) Stream removed, broadcasting: 1
I0512 09:57:24.092241 6 log.go:172] (0xc001af44d0) (0xc001149180) Stream removed, broadcasting: 3
I0512 09:57:24.092255 6 log.go:172] (0xc001af44d0) (0xc001149220) Stream removed, broadcasting: 5
May 12 09:57:24.092: INFO: Waiting for endpoints: map[]
May 12 09:57:24.372: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.166:8080/dial?request=hostName&protocol=udp&host=10.244.2.165&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-qgnkh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 09:57:24.372: INFO: >>> kubeConfig: /root/.kube/config
I0512 09:57:24.397088 6 log.go:172] (0xc001af49a0) (0xc0018d6be0) Create stream
I0512 09:57:24.397274 6 log.go:172] (0xc001af49a0) (0xc0018d6be0) Stream added, broadcasting: 1
I0512 09:57:24.399862 6 log.go:172] (0xc001af49a0) Reply frame received for 1
I0512 09:57:24.399905 6 log.go:172] (0xc001af49a0) (0xc0016c0fa0) Create stream
I0512 09:57:24.399924 6 log.go:172] (0xc001af49a0) (0xc0016c0fa0) Stream added, broadcasting: 3
I0512 09:57:24.401357 6 log.go:172] (0xc001af49a0) Reply frame received for 3
I0512 09:57:24.401419 6 log.go:172] (0xc001af49a0) (0xc0011492c0) Create stream
I0512 09:57:24.401455 6 log.go:172] (0xc001af49a0) (0xc0011492c0) Stream added, broadcasting: 5
I0512 09:57:24.402490 6 log.go:172] (0xc001af49a0) Reply frame received for 5
I0512 09:57:24.470973 6 log.go:172] (0xc001af49a0) Data frame received for 3
I0512 09:57:24.470998 6 log.go:172] (0xc0016c0fa0) (3) Data frame handling
I0512 09:57:24.471013 6 log.go:172] (0xc0016c0fa0) (3) Data frame sent
I0512 09:57:24.471374 6 log.go:172] (0xc001af49a0) Data frame received for 5
I0512 09:57:24.471398 6 log.go:172] (0xc0011492c0) (5) Data frame handling
I0512 09:57:24.471471 6 log.go:172] (0xc001af49a0) Data frame received for 3
I0512 09:57:24.471486 6 log.go:172] (0xc0016c0fa0) (3) Data frame handling
I0512 09:57:24.472907 6 log.go:172] (0xc001af49a0) Data frame received for 1
I0512 09:57:24.472963 6 log.go:172] (0xc0018d6be0) (1) Data frame handling
I0512 09:57:24.472993 6 log.go:172] (0xc0018d6be0) (1) Data frame sent
I0512 09:57:24.473017 6 log.go:172] (0xc001af49a0) (0xc0018d6be0) Stream removed, broadcasting: 1
I0512 09:57:24.473040 6 log.go:172] (0xc001af49a0) Go away received
I0512 09:57:24.473106 6 log.go:172] (0xc001af49a0) (0xc0018d6be0) Stream removed, broadcasting: 1
I0512 09:57:24.473287 6 log.go:172] (0xc001af49a0) (0xc0016c0fa0) Stream removed, broadcasting: 3
I0512 09:57:24.473322 6 log.go:172] (0xc001af49a0) (0xc0011492c0) Stream removed, broadcasting: 5
May 12 09:57:24.473: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:57:24.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-qgnkh" for this suite.
May 12 09:58:01.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:58:01.087: INFO: namespace: e2e-tests-pod-network-test-qgnkh, resource: bindings, ignored listing per whitelist
May 12 09:58:01.142: INFO: namespace e2e-tests-pod-network-test-qgnkh deletion completed in 36.665603492s

• [SLOW TEST:69.758 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] ConfigMap
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:58:01.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-284cl/configmap-test-10b74a9e-9437-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume configMaps
May 12 09:58:01.291: INFO: Waiting up to 5m0s for pod "pod-configmaps-10b815e8-9437-11ea-92b2-0242ac11001c" in namespace "e2e-tests-configmap-284cl" to be "success or failure"
May 12 09:58:01.295: INFO: Pod "pod-configmaps-10b815e8-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060135ms
May 12 09:58:03.364: INFO: Pod "pod-configmaps-10b815e8-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073401083s
May 12 09:58:05.402: INFO: Pod "pod-configmaps-10b815e8-9437-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.111162452s
May 12 09:58:07.405: INFO: Pod "pod-configmaps-10b815e8-9437-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113903152s
STEP: Saw pod success
May 12 09:58:07.405: INFO: Pod "pod-configmaps-10b815e8-9437-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 09:58:07.407: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-10b815e8-9437-11ea-92b2-0242ac11001c container env-test:
STEP: delete the pod
May 12 09:58:07.453: INFO: Waiting for pod pod-configmaps-10b815e8-9437-11ea-92b2-0242ac11001c to disappear
May 12 09:58:07.475: INFO: Pod pod-configmaps-10b815e8-9437-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:58:07.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-284cl" for this suite.
May 12 09:58:13.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:58:13.502: INFO: namespace: e2e-tests-configmap-284cl, resource: bindings, ignored listing per whitelist
May 12 09:58:13.572: INFO: namespace e2e-tests-configmap-284cl deletion completed in 6.094863707s

• [SLOW TEST:12.430 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:58:13.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vt8qm
May 12 09:58:19.858: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vt8qm
STEP: checking the pod's current state and verifying that restartCount is present
May 12 09:58:19.860: INFO: Initial restart count of pod liveness-http is 0
May 12 09:58:36.273: INFO: Restart count of pod e2e-tests-container-probe-vt8qm/liveness-http is now 1 (16.412595233s elapsed)
May 12 09:58:54.765: INFO: Restart count of pod e2e-tests-container-probe-vt8qm/liveness-http is now 2 (34.904996101s elapsed)
May 12 09:59:14.803: INFO: Restart count of pod e2e-tests-container-probe-vt8qm/liveness-http is now 3 (54.942216302s elapsed)
May 12 09:59:33.349: INFO: Restart count of pod e2e-tests-container-probe-vt8qm/liveness-http is now 4 (1m13.488549868s elapsed)
May 12 10:00:37.525: INFO: Restart count of pod e2e-tests-container-probe-vt8qm/liveness-http is now 5 (2m17.664490458s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:00:37.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vt8qm" for this suite.
May 12 10:00:43.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:00:43.627: INFO: namespace: e2e-tests-container-probe-vt8qm, resource: bindings, ignored listing per whitelist
May 12 10:00:43.641: INFO: namespace e2e-tests-container-probe-vt8qm deletion completed in 6.078616767s

• [SLOW TEST:150.068 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:00:43.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 12 10:00:44.034: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:00:53.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9sp6q" for this suite.
May 12 10:01:17.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:01:17.611: INFO: namespace: e2e-tests-init-container-9sp6q, resource: bindings, ignored listing per whitelist
May 12 10:01:17.625: INFO: namespace e2e-tests-init-container-9sp6q deletion completed in 24.181519342s

• [SLOW TEST:33.983 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:01:17.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 12 10:01:18.111: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 12 10:01:18.254: INFO: Waiting for terminating namespaces to be deleted...
May 12 10:01:18.257: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 12 10:01:18.260: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 12 10:01:18.260: INFO: Container kube-proxy ready: true, restart count 0
May 12 10:01:18.260: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 12 10:01:18.260: INFO: Container kindnet-cni ready: true, restart count 0
May 12 10:01:18.260: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 12 10:01:18.260: INFO: Container coredns ready: true, restart count 0
May 12 10:01:18.260: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 12 10:01:18.264: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 12 10:01:18.264: INFO: Container kindnet-cni ready: true, restart count 0
May 12 10:01:18.264: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 12 10:01:18.264: INFO: Container coredns ready: true, restart count 0
May 12 10:01:18.264: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 12 10:01:18.264: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-89becc14-9437-11ea-92b2-0242ac11001c 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-89becc14-9437-11ea-92b2-0242ac11001c off the node hunter-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-89becc14-9437-11ea-92b2-0242ac11001c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:01:28.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-hx9rf" for this suite.
May 12 10:01:42.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:01:42.492: INFO: namespace: e2e-tests-sched-pred-hx9rf, resource: bindings, ignored listing per whitelist
May 12 10:01:42.630: INFO: namespace e2e-tests-sched-pred-hx9rf deletion completed in 14.167453418s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:25.005 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:01:42.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 12 10:01:42.897: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94cee893-9437-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-v5fgn" to be "success or failure"
May 12 10:01:42.916: INFO: Pod "downwardapi-volume-94cee893-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.699133ms
May 12 10:01:44.920: INFO: Pod "downwardapi-volume-94cee893-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023586335s
May 12 10:01:46.924: INFO: Pod "downwardapi-volume-94cee893-9437-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027683787s
STEP: Saw pod success
May 12 10:01:46.924: INFO: Pod "downwardapi-volume-94cee893-9437-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:01:46.927: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-94cee893-9437-11ea-92b2-0242ac11001c container client-container:
STEP: delete the pod
May 12 10:01:47.001: INFO: Waiting for pod downwardapi-volume-94cee893-9437-11ea-92b2-0242ac11001c to disappear
May 12 10:01:47.082: INFO: Pod downwardapi-volume-94cee893-9437-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:01:47.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-v5fgn" for this suite.
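The Downward API test above checks the fallback rule for `limits.memory`: when a container declares no memory limit, a `resourceFieldRef` for `limits.memory` resolves to the node's allocatable memory instead. A minimal sketch of that semantics (an illustration under stated assumptions, not Kubernetes source; `effective_memory_limit` is a hypothetical helper):

```python
# Sketch: the value a downward API volume exposes for resourceFieldRef
# "limits.memory" is the container's own memory limit if one is set,
# otherwise the node's allocatable memory (the behavior this test verifies).
def effective_memory_limit(container_limit_bytes, node_allocatable_bytes):
    if container_limit_bytes is not None:
        return container_limit_bytes      # explicit limit wins
    return node_allocatable_bytes         # default: node allocatable

GiB = 1024 ** 3
print(effective_memory_limit(None, 4 * GiB))       # no limit set: allocatable
print(effective_memory_limit(512 * 1024**2, 4 * GiB))  # explicit limit
```

The test pod sets no memory limit, so the value read back from the mounted volume file is expected to equal node allocatable memory, which is why success requires only that the pod runs to completion with the expected file contents.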
May 12 10:01:55.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:01:55.259: INFO: namespace: e2e-tests-downward-api-v5fgn, resource: bindings, ignored listing per whitelist
May 12 10:01:55.276: INFO: namespace e2e-tests-downward-api-v5fgn deletion completed in 8.189802323s
• [SLOW TEST:12.645 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:01:55.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0512 10:02:26.427034 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 10:02:26.427: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:02:26.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-d4s7k" for this suite.
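The garbage-collector test above deletes a Deployment with `deleteOptions.PropagationPolicy: Orphan` and then waits 30 seconds to confirm the dependent ReplicaSet is not collected. A minimal sketch of the three propagation policies (a hypothetical data model for illustration, not client-go or the controller's code; `delete_owner` is an invented helper):

```python
# Sketch of deletion propagation semantics: Foreground and Background both
# end with dependents deleted by the garbage collector; Orphan instead keeps
# the dependents and clears their ownerReferences so the GC ignores them.
def delete_owner(policy: str, dependents: list) -> list:
    """Return the dependents that survive deleting their owner."""
    if policy in ("Foreground", "Background"):
        return []  # GC deletes the dependents (ordering differs, outcome same)
    if policy == "Orphan":
        return [{**d, "ownerReferences": []} for d in dependents]
    raise ValueError(f"unknown propagation policy: {policy}")

rs = {"name": "frontend-5c9b8", "ownerReferences": ["frontend"]}
print(delete_owner("Orphan", [rs]))      # RS survives, owner refs cleared
print(delete_owner("Background", [rs]))  # RS is collected
```

This is why the test only has to observe that the ReplicaSet still exists after the wait: under Orphan, "mistakenly deletes the rs" would be the failure mode.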
May 12 10:02:32.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:02:32.458: INFO: namespace: e2e-tests-gc-d4s7k, resource: bindings, ignored listing per whitelist May 12 10:02:32.511: INFO: namespace e2e-tests-gc-d4s7k deletion completed in 6.082430961s • [SLOW TEST:37.236 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:02:32.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-qsqwn I0512 10:02:33.261009 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-qsqwn, replica count: 1 I0512 10:02:34.311403 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:02:35.311589 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0512 10:02:36.311784 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:02:37.311991 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:02:38.312199 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 10:02:38.550: INFO: Created: latency-svc-nxcvw May 12 10:02:38.560: INFO: Got endpoints: latency-svc-nxcvw [148.337625ms] May 12 10:02:38.636: INFO: Created: latency-svc-thql5 May 12 10:02:38.675: INFO: Got endpoints: latency-svc-thql5 [115.000474ms] May 12 10:02:38.710: INFO: Created: latency-svc-gtp8q May 12 10:02:38.728: INFO: Got endpoints: latency-svc-gtp8q [167.303629ms] May 12 10:02:38.775: INFO: Created: latency-svc-2lpjb May 12 10:02:38.845: INFO: Got endpoints: latency-svc-2lpjb [284.299757ms] May 12 10:02:38.846: INFO: Created: latency-svc-4dwd9 May 12 10:02:38.861: INFO: Got endpoints: latency-svc-4dwd9 [299.925088ms] May 12 10:02:38.883: INFO: Created: latency-svc-mzqn2 May 12 10:02:38.891: INFO: Got endpoints: latency-svc-mzqn2 [329.815362ms] May 12 10:02:38.963: INFO: Created: latency-svc-bhgdv May 12 10:02:38.965: INFO: Got endpoints: latency-svc-bhgdv [404.588737ms] May 12 10:02:39.033: INFO: Created: latency-svc-l6phn May 12 10:02:39.172: INFO: Got endpoints: latency-svc-l6phn [611.492226ms] May 12 10:02:39.177: INFO: Created: latency-svc-sw794 May 12 10:02:39.187: INFO: Got endpoints: latency-svc-sw794 [625.982398ms] May 12 10:02:39.265: INFO: Created: latency-svc-dhmqq May 12 10:02:39.358: INFO: Got endpoints: latency-svc-dhmqq [796.857416ms] May 12 10:02:39.373: INFO: Created: latency-svc-pmr9r May 12 10:02:39.392: INFO: Got endpoints: latency-svc-pmr9r [830.946796ms] May 12 
10:02:39.563: INFO: Created: latency-svc-wtmgg May 12 10:02:39.589: INFO: Got endpoints: latency-svc-wtmgg [1.02791927s] May 12 10:02:39.711: INFO: Created: latency-svc-l2rwd May 12 10:02:39.727: INFO: Got endpoints: latency-svc-l2rwd [1.166095528s] May 12 10:02:39.790: INFO: Created: latency-svc-f6frj May 12 10:02:39.805: INFO: Got endpoints: latency-svc-f6frj [1.243989383s] May 12 10:02:39.879: INFO: Created: latency-svc-bz49b May 12 10:02:39.914: INFO: Got endpoints: latency-svc-bz49b [1.352687976s] May 12 10:02:39.964: INFO: Created: latency-svc-njckd May 12 10:02:39.977: INFO: Got endpoints: latency-svc-njckd [1.416421688s] May 12 10:02:40.018: INFO: Created: latency-svc-chfrx May 12 10:02:40.034: INFO: Got endpoints: latency-svc-chfrx [1.358548859s] May 12 10:02:40.064: INFO: Created: latency-svc-jj4zf May 12 10:02:40.070: INFO: Got endpoints: latency-svc-jj4zf [1.342585762s] May 12 10:02:40.094: INFO: Created: latency-svc-l5qg7 May 12 10:02:40.107: INFO: Got endpoints: latency-svc-l5qg7 [1.261848678s] May 12 10:02:40.160: INFO: Created: latency-svc-b98fk May 12 10:02:40.163: INFO: Got endpoints: latency-svc-b98fk [1.302133377s] May 12 10:02:40.198: INFO: Created: latency-svc-vgf95 May 12 10:02:40.216: INFO: Got endpoints: latency-svc-vgf95 [1.325144367s] May 12 10:02:40.251: INFO: Created: latency-svc-7smdx May 12 10:02:40.328: INFO: Got endpoints: latency-svc-7smdx [1.362150644s] May 12 10:02:40.354: INFO: Created: latency-svc-qjqbh May 12 10:02:40.373: INFO: Got endpoints: latency-svc-qjqbh [1.200217951s] May 12 10:02:40.396: INFO: Created: latency-svc-tq7dd May 12 10:02:40.415: INFO: Got endpoints: latency-svc-tq7dd [1.228551068s] May 12 10:02:40.466: INFO: Created: latency-svc-9qp6l May 12 10:02:40.469: INFO: Got endpoints: latency-svc-9qp6l [1.111021936s] May 12 10:02:40.498: INFO: Created: latency-svc-d7jjf May 12 10:02:40.522: INFO: Got endpoints: latency-svc-d7jjf [1.13020597s] May 12 10:02:40.559: INFO: Created: latency-svc-rl66j May 12 
10:02:40.621: INFO: Got endpoints: latency-svc-rl66j [1.032243245s] May 12 10:02:40.660: INFO: Created: latency-svc-4cblz May 12 10:02:40.789: INFO: Got endpoints: latency-svc-4cblz [1.062719668s] May 12 10:02:40.794: INFO: Created: latency-svc-68t72 May 12 10:02:40.801: INFO: Got endpoints: latency-svc-68t72 [996.3073ms] May 12 10:02:40.827: INFO: Created: latency-svc-jfhz8 May 12 10:02:40.861: INFO: Got endpoints: latency-svc-jfhz8 [947.631321ms] May 12 10:02:40.941: INFO: Created: latency-svc-9fs6t May 12 10:02:40.943: INFO: Got endpoints: latency-svc-9fs6t [966.16778ms] May 12 10:02:40.991: INFO: Created: latency-svc-4tsgr May 12 10:02:41.030: INFO: Got endpoints: latency-svc-4tsgr [996.0696ms] May 12 10:02:41.106: INFO: Created: latency-svc-7klnr May 12 10:02:41.128: INFO: Got endpoints: latency-svc-7klnr [1.0576397s] May 12 10:02:41.159: INFO: Created: latency-svc-kbngj May 12 10:02:41.174: INFO: Got endpoints: latency-svc-kbngj [1.067264174s] May 12 10:02:41.194: INFO: Created: latency-svc-gp5x6 May 12 10:02:41.289: INFO: Got endpoints: latency-svc-gp5x6 [1.126169206s] May 12 10:02:41.289: INFO: Created: latency-svc-lmq5n May 12 10:02:41.307: INFO: Got endpoints: latency-svc-lmq5n [1.091113651s] May 12 10:02:41.339: INFO: Created: latency-svc-twmd6 May 12 10:02:41.355: INFO: Got endpoints: latency-svc-twmd6 [1.027649763s] May 12 10:02:41.430: INFO: Created: latency-svc-5wzjp May 12 10:02:41.439: INFO: Got endpoints: latency-svc-5wzjp [1.066699753s] May 12 10:02:41.486: INFO: Created: latency-svc-nrg76 May 12 10:02:41.524: INFO: Got endpoints: latency-svc-nrg76 [1.108792815s] May 12 10:02:41.591: INFO: Created: latency-svc-zz9c6 May 12 10:02:41.596: INFO: Got endpoints: latency-svc-zz9c6 [1.127414446s] May 12 10:02:41.626: INFO: Created: latency-svc-svbnm May 12 10:02:41.644: INFO: Got endpoints: latency-svc-svbnm [1.122632716s] May 12 10:02:41.668: INFO: Created: latency-svc-9hbrj May 12 10:02:41.682: INFO: Got endpoints: latency-svc-9hbrj [1.06078808s] May 
12 10:02:41.741: INFO: Created: latency-svc-6d5sd May 12 10:02:41.743: INFO: Got endpoints: latency-svc-6d5sd [953.629438ms] May 12 10:02:41.786: INFO: Created: latency-svc-wcmcr May 12 10:02:41.802: INFO: Got endpoints: latency-svc-wcmcr [1.000876434s] May 12 10:02:41.993: INFO: Created: latency-svc-6rks8 May 12 10:02:42.050: INFO: Got endpoints: latency-svc-6rks8 [1.18904755s] May 12 10:02:42.191: INFO: Created: latency-svc-2fdm6 May 12 10:02:42.193: INFO: Got endpoints: latency-svc-2fdm6 [1.249653744s] May 12 10:02:42.424: INFO: Created: latency-svc-7grkw May 12 10:02:42.432: INFO: Got endpoints: latency-svc-7grkw [1.402012028s] May 12 10:02:42.725: INFO: Created: latency-svc-ng92l May 12 10:02:42.727: INFO: Got endpoints: latency-svc-ng92l [1.598472636s] May 12 10:02:42.891: INFO: Created: latency-svc-9zvxk May 12 10:02:43.310: INFO: Got endpoints: latency-svc-9zvxk [2.135678856s] May 12 10:02:43.658: INFO: Created: latency-svc-rhcj2 May 12 10:02:43.662: INFO: Got endpoints: latency-svc-rhcj2 [2.372841212s] May 12 10:02:43.880: INFO: Created: latency-svc-gk4vj May 12 10:02:43.883: INFO: Got endpoints: latency-svc-gk4vj [2.575891087s] May 12 10:02:44.347: INFO: Created: latency-svc-k8bmt May 12 10:02:44.363: INFO: Got endpoints: latency-svc-k8bmt [3.00771334s] May 12 10:02:44.590: INFO: Created: latency-svc-xb9kt May 12 10:02:44.604: INFO: Got endpoints: latency-svc-xb9kt [3.164082731s] May 12 10:02:44.802: INFO: Created: latency-svc-dcdqz May 12 10:02:44.805: INFO: Got endpoints: latency-svc-dcdqz [3.281300272s] May 12 10:02:44.869: INFO: Created: latency-svc-vtwtn May 12 10:02:45.028: INFO: Got endpoints: latency-svc-vtwtn [3.432104026s] May 12 10:02:45.035: INFO: Created: latency-svc-tnm7r May 12 10:02:45.072: INFO: Got endpoints: latency-svc-tnm7r [3.427108088s] May 12 10:02:45.205: INFO: Created: latency-svc-ms8vx May 12 10:02:45.246: INFO: Got endpoints: latency-svc-ms8vx [3.564018152s] May 12 10:02:45.364: INFO: Created: latency-svc-2wxl7 May 12 
10:02:45.388: INFO: Got endpoints: latency-svc-2wxl7 [3.644888293s] May 12 10:02:45.418: INFO: Created: latency-svc-zllt2 May 12 10:02:45.457: INFO: Got endpoints: latency-svc-zllt2 [3.655051928s] May 12 10:02:45.526: INFO: Created: latency-svc-5mhj7 May 12 10:02:45.541: INFO: Got endpoints: latency-svc-5mhj7 [3.490133494s] May 12 10:02:45.562: INFO: Created: latency-svc-vwm4r May 12 10:02:45.601: INFO: Got endpoints: latency-svc-vwm4r [3.408564873s] May 12 10:02:45.697: INFO: Created: latency-svc-wddfw May 12 10:02:45.697: INFO: Got endpoints: latency-svc-wddfw [3.264731807s] May 12 10:02:45.742: INFO: Created: latency-svc-fmx9d May 12 10:02:45.758: INFO: Got endpoints: latency-svc-fmx9d [3.031392445s] May 12 10:02:45.894: INFO: Created: latency-svc-8m7zd May 12 10:02:45.951: INFO: Got endpoints: latency-svc-8m7zd [2.640731945s] May 12 10:02:46.077: INFO: Created: latency-svc-prwhn May 12 10:02:46.107: INFO: Got endpoints: latency-svc-prwhn [2.444677362s] May 12 10:02:46.257: INFO: Created: latency-svc-7cqrg May 12 10:02:46.284: INFO: Created: latency-svc-zjc2j May 12 10:02:46.285: INFO: Got endpoints: latency-svc-7cqrg [2.401951889s] May 12 10:02:46.299: INFO: Got endpoints: latency-svc-zjc2j [1.935621044s] May 12 10:02:46.332: INFO: Created: latency-svc-87lzz May 12 10:02:46.347: INFO: Got endpoints: latency-svc-87lzz [1.743545703s] May 12 10:02:46.406: INFO: Created: latency-svc-szvqj May 12 10:02:46.420: INFO: Got endpoints: latency-svc-szvqj [1.61492005s] May 12 10:02:46.474: INFO: Created: latency-svc-hk5b2 May 12 10:02:46.549: INFO: Got endpoints: latency-svc-hk5b2 [1.520544459s] May 12 10:02:46.554: INFO: Created: latency-svc-9whfp May 12 10:02:46.571: INFO: Got endpoints: latency-svc-9whfp [1.498990709s] May 12 10:02:46.596: INFO: Created: latency-svc-bhhcx May 12 10:02:46.607: INFO: Got endpoints: latency-svc-bhhcx [1.360440127s] May 12 10:02:46.636: INFO: Created: latency-svc-fdjl2 May 12 10:02:46.753: INFO: Got endpoints: latency-svc-fdjl2 
[1.364879249s] May 12 10:02:46.776: INFO: Created: latency-svc-bmgrh May 12 10:02:46.793: INFO: Got endpoints: latency-svc-bmgrh [1.336119847s] May 12 10:02:46.921: INFO: Created: latency-svc-gmv7t May 12 10:02:46.986: INFO: Got endpoints: latency-svc-gmv7t [1.445159413s] May 12 10:02:47.094: INFO: Created: latency-svc-5q4h5 May 12 10:02:47.136: INFO: Got endpoints: latency-svc-5q4h5 [1.534513007s] May 12 10:02:47.359: INFO: Created: latency-svc-xw42j May 12 10:02:47.363: INFO: Got endpoints: latency-svc-xw42j [1.665518987s] May 12 10:02:47.514: INFO: Created: latency-svc-wzmwn May 12 10:02:47.544: INFO: Got endpoints: latency-svc-wzmwn [1.785883627s] May 12 10:02:47.603: INFO: Created: latency-svc-q7z9b May 12 10:02:47.693: INFO: Got endpoints: latency-svc-q7z9b [1.742440482s] May 12 10:02:47.707: INFO: Created: latency-svc-c8m59 May 12 10:02:47.730: INFO: Got endpoints: latency-svc-c8m59 [1.623766614s] May 12 10:02:47.786: INFO: Created: latency-svc-2b7cc May 12 10:02:47.855: INFO: Got endpoints: latency-svc-2b7cc [1.570106043s] May 12 10:02:47.863: INFO: Created: latency-svc-bz7dd May 12 10:02:47.904: INFO: Got endpoints: latency-svc-bz7dd [1.604893692s] May 12 10:02:47.953: INFO: Created: latency-svc-sdnfg May 12 10:02:48.028: INFO: Got endpoints: latency-svc-sdnfg [1.681011898s] May 12 10:02:48.044: INFO: Created: latency-svc-c8ms5 May 12 10:02:48.071: INFO: Got endpoints: latency-svc-c8ms5 [1.650396784s] May 12 10:02:48.072: INFO: Created: latency-svc-5sptv May 12 10:02:48.086: INFO: Got endpoints: latency-svc-5sptv [1.536962958s] May 12 10:02:48.113: INFO: Created: latency-svc-9xgr5 May 12 10:02:48.202: INFO: Got endpoints: latency-svc-9xgr5 [1.631569492s] May 12 10:02:48.206: INFO: Created: latency-svc-wwps8 May 12 10:02:48.225: INFO: Got endpoints: latency-svc-wwps8 [1.618670277s] May 12 10:02:48.277: INFO: Created: latency-svc-88zpm May 12 10:02:48.397: INFO: Got endpoints: latency-svc-88zpm [1.643653822s] May 12 10:02:48.427: INFO: Created: 
latency-svc-2lj4m May 12 10:02:48.454: INFO: Got endpoints: latency-svc-2lj4m [1.660132493s] May 12 10:02:48.533: INFO: Created: latency-svc-bmcxm May 12 10:02:48.544: INFO: Got endpoints: latency-svc-bmcxm [1.558046416s] May 12 10:02:48.570: INFO: Created: latency-svc-pqqxw May 12 10:02:48.613: INFO: Created: latency-svc-zrtq2 May 12 10:02:48.685: INFO: Got endpoints: latency-svc-pqqxw [1.54874453s] May 12 10:02:48.686: INFO: Created: latency-svc-89qbh May 12 10:02:48.701: INFO: Got endpoints: latency-svc-89qbh [1.156838948s] May 12 10:02:48.731: INFO: Got endpoints: latency-svc-zrtq2 [1.368222363s] May 12 10:02:48.731: INFO: Created: latency-svc-zvnwk May 12 10:02:48.767: INFO: Got endpoints: latency-svc-zvnwk [1.073345242s] May 12 10:02:48.831: INFO: Created: latency-svc-pf9k2 May 12 10:02:48.845: INFO: Got endpoints: latency-svc-pf9k2 [1.115053452s] May 12 10:02:48.895: INFO: Created: latency-svc-4wcr4 May 12 10:02:48.924: INFO: Got endpoints: latency-svc-4wcr4 [1.06876229s] May 12 10:02:48.995: INFO: Created: latency-svc-bn85v May 12 10:02:49.002: INFO: Got endpoints: latency-svc-bn85v [1.098119939s] May 12 10:02:49.037: INFO: Created: latency-svc-vl826 May 12 10:02:49.074: INFO: Got endpoints: latency-svc-vl826 [1.04597951s] May 12 10:02:49.159: INFO: Created: latency-svc-d9nht May 12 10:02:49.164: INFO: Got endpoints: latency-svc-d9nht [1.093681795s] May 12 10:02:49.217: INFO: Created: latency-svc-5wtqz May 12 10:02:49.232: INFO: Got endpoints: latency-svc-5wtqz [1.145647384s] May 12 10:02:49.317: INFO: Created: latency-svc-6dznv May 12 10:02:49.319: INFO: Got endpoints: latency-svc-6dznv [1.116863915s] May 12 10:02:49.363: INFO: Created: latency-svc-jlpqt May 12 10:02:49.382: INFO: Got endpoints: latency-svc-jlpqt [1.156833843s] May 12 10:02:49.403: INFO: Created: latency-svc-4hjwv May 12 10:02:49.460: INFO: Got endpoints: latency-svc-4hjwv [1.062893971s] May 12 10:02:49.483: INFO: Created: latency-svc-qtps7 May 12 10:02:49.497: INFO: Got endpoints: 
latency-svc-qtps7 [1.042853417s] May 12 10:02:49.531: INFO: Created: latency-svc-xvpt5 May 12 10:02:49.610: INFO: Got endpoints: latency-svc-xvpt5 [1.065632752s] May 12 10:02:49.625: INFO: Created: latency-svc-kq25j May 12 10:02:49.641: INFO: Got endpoints: latency-svc-kq25j [956.33645ms] May 12 10:02:49.666: INFO: Created: latency-svc-56dsb May 12 10:02:49.684: INFO: Got endpoints: latency-svc-56dsb [982.781004ms] May 12 10:02:49.703: INFO: Created: latency-svc-22pn8 May 12 10:02:49.747: INFO: Got endpoints: latency-svc-22pn8 [1.016470108s] May 12 10:02:49.777: INFO: Created: latency-svc-97brk May 12 10:02:49.793: INFO: Got endpoints: latency-svc-97brk [1.025924848s] May 12 10:02:49.831: INFO: Created: latency-svc-m9tf6 May 12 10:02:49.896: INFO: Got endpoints: latency-svc-m9tf6 [1.050738707s] May 12 10:02:49.898: INFO: Created: latency-svc-4g68x May 12 10:02:49.913: INFO: Got endpoints: latency-svc-4g68x [989.258575ms] May 12 10:02:49.943: INFO: Created: latency-svc-d8dlv May 12 10:02:49.961: INFO: Got endpoints: latency-svc-d8dlv [959.477446ms] May 12 10:02:50.065: INFO: Created: latency-svc-gqf8v May 12 10:02:50.067: INFO: Got endpoints: latency-svc-gqf8v [992.84626ms] May 12 10:02:50.160: INFO: Created: latency-svc-9g79h May 12 10:02:50.220: INFO: Got endpoints: latency-svc-9g79h [1.055274765s] May 12 10:02:50.225: INFO: Created: latency-svc-mccbb May 12 10:02:50.239: INFO: Got endpoints: latency-svc-mccbb [1.006661247s] May 12 10:02:50.279: INFO: Created: latency-svc-h6hjm May 12 10:02:50.310: INFO: Got endpoints: latency-svc-h6hjm [990.835864ms] May 12 10:02:50.443: INFO: Created: latency-svc-z7qjh May 12 10:02:50.501: INFO: Got endpoints: latency-svc-z7qjh [1.119074305s] May 12 10:02:50.640: INFO: Created: latency-svc-9mjn6 May 12 10:02:50.668: INFO: Got endpoints: latency-svc-9mjn6 [1.208552663s] May 12 10:02:50.731: INFO: Created: latency-svc-w2dm5 May 12 10:02:50.831: INFO: Got endpoints: latency-svc-w2dm5 [1.334392573s] May 12 10:02:50.849: INFO: 
Created: latency-svc-mslfl May 12 10:02:50.893: INFO: Got endpoints: latency-svc-mslfl [1.283202992s] May 12 10:02:50.994: INFO: Created: latency-svc-tnh9z May 12 10:02:51.020: INFO: Got endpoints: latency-svc-tnh9z [1.378731447s] May 12 10:02:51.055: INFO: Created: latency-svc-2x7mb May 12 10:02:51.075: INFO: Got endpoints: latency-svc-2x7mb [1.391016108s] May 12 10:02:51.210: INFO: Created: latency-svc-sc8cj May 12 10:02:51.255: INFO: Got endpoints: latency-svc-sc8cj [1.507328678s] May 12 10:02:51.406: INFO: Created: latency-svc-k445x May 12 10:02:51.410: INFO: Got endpoints: latency-svc-k445x [1.616828695s] May 12 10:02:51.488: INFO: Created: latency-svc-wlldw May 12 10:02:51.496: INFO: Got endpoints: latency-svc-wlldw [1.59930365s] May 12 10:02:51.565: INFO: Created: latency-svc-7m9x6 May 12 10:02:51.574: INFO: Got endpoints: latency-svc-7m9x6 [1.660330978s] May 12 10:02:51.603: INFO: Created: latency-svc-fn9dx May 12 10:02:51.653: INFO: Got endpoints: latency-svc-fn9dx [1.691912256s] May 12 10:02:51.724: INFO: Created: latency-svc-9cpl2 May 12 10:02:51.726: INFO: Got endpoints: latency-svc-9cpl2 [1.658996594s] May 12 10:02:51.764: INFO: Created: latency-svc-h5mmr May 12 10:02:51.779: INFO: Got endpoints: latency-svc-h5mmr [1.559211153s] May 12 10:02:51.800: INFO: Created: latency-svc-zntj7 May 12 10:02:51.891: INFO: Got endpoints: latency-svc-zntj7 [1.652466609s] May 12 10:02:51.907: INFO: Created: latency-svc-7jj2h May 12 10:02:51.917: INFO: Got endpoints: latency-svc-7jj2h [1.607386897s] May 12 10:02:51.948: INFO: Created: latency-svc-hklvw May 12 10:02:51.978: INFO: Got endpoints: latency-svc-hklvw [1.477075427s] May 12 10:02:52.070: INFO: Created: latency-svc-hvvp9 May 12 10:02:52.086: INFO: Got endpoints: latency-svc-hvvp9 [1.417688986s] May 12 10:02:52.116: INFO: Created: latency-svc-89mjc May 12 10:02:52.135: INFO: Got endpoints: latency-svc-89mjc [1.30329516s] May 12 10:02:52.215: INFO: Created: latency-svc-vxf6h May 12 10:02:52.217: INFO: Got 
endpoints: latency-svc-vxf6h [1.324580577s] May 12 10:02:52.245: INFO: Created: latency-svc-q8hkx May 12 10:02:52.262: INFO: Got endpoints: latency-svc-q8hkx [1.241878614s] May 12 10:02:52.285: INFO: Created: latency-svc-zmv6c May 12 10:02:52.394: INFO: Got endpoints: latency-svc-zmv6c [1.31908336s] May 12 10:02:52.611: INFO: Created: latency-svc-4zhqw May 12 10:02:52.622: INFO: Got endpoints: latency-svc-4zhqw [1.366765786s] May 12 10:02:52.656: INFO: Created: latency-svc-nddvp May 12 10:02:52.664: INFO: Got endpoints: latency-svc-nddvp [1.25405481s] May 12 10:02:52.742: INFO: Created: latency-svc-tnnhp May 12 10:02:52.749: INFO: Got endpoints: latency-svc-tnnhp [1.253270182s] May 12 10:02:52.782: INFO: Created: latency-svc-ctqmq May 12 10:02:52.790: INFO: Got endpoints: latency-svc-ctqmq [1.216741698s] May 12 10:02:52.818: INFO: Created: latency-svc-m7cr8 May 12 10:02:52.834: INFO: Got endpoints: latency-svc-m7cr8 [1.180318615s] May 12 10:02:52.934: INFO: Created: latency-svc-7s5pp May 12 10:02:52.978: INFO: Got endpoints: latency-svc-7s5pp [1.251639931s] May 12 10:02:53.022: INFO: Created: latency-svc-scmrr May 12 10:02:53.031: INFO: Got endpoints: latency-svc-scmrr [1.252204974s] May 12 10:02:53.124: INFO: Created: latency-svc-n57ft May 12 10:02:53.147: INFO: Got endpoints: latency-svc-n57ft [1.255602052s] May 12 10:02:53.220: INFO: Created: latency-svc-5fnsv May 12 10:02:53.224: INFO: Got endpoints: latency-svc-5fnsv [1.306504421s] May 12 10:02:53.252: INFO: Created: latency-svc-2st49 May 12 10:02:53.279: INFO: Got endpoints: latency-svc-2st49 [1.300801245s] May 12 10:02:53.310: INFO: Created: latency-svc-5h8l9 May 12 10:02:53.352: INFO: Got endpoints: latency-svc-5h8l9 [1.265605563s] May 12 10:02:53.394: INFO: Created: latency-svc-wq9zw May 12 10:02:53.406: INFO: Got endpoints: latency-svc-wq9zw [1.270945819s] May 12 10:02:53.432: INFO: Created: latency-svc-k7vnz May 12 10:02:53.448: INFO: Got endpoints: latency-svc-k7vnz [1.230583987s] May 12 10:02:53.574: 
INFO: Created: latency-svc-x4jdc May 12 10:02:53.577: INFO: Got endpoints: latency-svc-x4jdc [1.314820735s] May 12 10:02:53.808: INFO: Created: latency-svc-dcgdz May 12 10:02:53.815: INFO: Got endpoints: latency-svc-dcgdz [1.420957509s] May 12 10:02:54.034: INFO: Created: latency-svc-ghd9q May 12 10:02:54.066: INFO: Got endpoints: latency-svc-ghd9q [1.444211954s] May 12 10:02:54.251: INFO: Created: latency-svc-dxrmh May 12 10:02:54.338: INFO: Got endpoints: latency-svc-dxrmh [1.673904238s] May 12 10:02:54.446: INFO: Created: latency-svc-stks9 May 12 10:02:54.487: INFO: Got endpoints: latency-svc-stks9 [1.737751818s] May 12 10:02:54.677: INFO: Created: latency-svc-89zbf May 12 10:02:54.679: INFO: Got endpoints: latency-svc-89zbf [1.88851635s] May 12 10:02:54.904: INFO: Created: latency-svc-rw9zc May 12 10:02:54.907: INFO: Got endpoints: latency-svc-rw9zc [2.073172307s] May 12 10:02:55.077: INFO: Created: latency-svc-8wbx5 May 12 10:02:55.621: INFO: Got endpoints: latency-svc-8wbx5 [2.64346193s] May 12 10:02:55.629: INFO: Created: latency-svc-8d79g May 12 10:02:55.686: INFO: Got endpoints: latency-svc-8d79g [2.654785995s] May 12 10:02:55.879: INFO: Created: latency-svc-585h7 May 12 10:02:55.882: INFO: Got endpoints: latency-svc-585h7 [2.735653912s] May 12 10:02:55.938: INFO: Created: latency-svc-dxfq9 May 12 10:02:56.046: INFO: Got endpoints: latency-svc-dxfq9 [2.822243084s] May 12 10:02:56.126: INFO: Created: latency-svc-nqjb7 May 12 10:02:56.178: INFO: Got endpoints: latency-svc-nqjb7 [2.899034847s] May 12 10:02:56.198: INFO: Created: latency-svc-djqrl May 12 10:02:56.239: INFO: Got endpoints: latency-svc-djqrl [2.886894316s] May 12 10:02:56.276: INFO: Created: latency-svc-vt86p May 12 10:02:56.328: INFO: Got endpoints: latency-svc-vt86p [2.922261922s] May 12 10:02:56.340: INFO: Created: latency-svc-lmzfh May 12 10:02:56.364: INFO: Got endpoints: latency-svc-lmzfh [2.915658264s] May 12 10:02:56.400: INFO: Created: latency-svc-x2xjl May 12 10:02:56.414: INFO: Got 
endpoints: latency-svc-x2xjl [2.836889116s] May 12 10:02:56.479: INFO: Created: latency-svc-gd2g4 May 12 10:02:56.492: INFO: Got endpoints: latency-svc-gd2g4 [2.676879299s] May 12 10:02:56.535: INFO: Created: latency-svc-25b4r May 12 10:02:56.559: INFO: Got endpoints: latency-svc-25b4r [2.492831039s] May 12 10:02:56.634: INFO: Created: latency-svc-z7bvh May 12 10:02:56.643: INFO: Got endpoints: latency-svc-z7bvh [2.304903801s] May 12 10:02:56.670: INFO: Created: latency-svc-r8bhr May 12 10:02:56.679: INFO: Got endpoints: latency-svc-r8bhr [2.192544547s] May 12 10:02:56.712: INFO: Created: latency-svc-s7mss May 12 10:02:56.783: INFO: Got endpoints: latency-svc-s7mss [2.104215739s] May 12 10:02:56.817: INFO: Created: latency-svc-7wngh May 12 10:02:56.830: INFO: Got endpoints: latency-svc-7wngh [1.922964791s] May 12 10:02:56.850: INFO: Created: latency-svc-xcfwb May 12 10:02:56.867: INFO: Got endpoints: latency-svc-xcfwb [1.245379918s] May 12 10:02:56.933: INFO: Created: latency-svc-vcgqj May 12 10:02:56.935: INFO: Got endpoints: latency-svc-vcgqj [1.248997056s] May 12 10:02:56.976: INFO: Created: latency-svc-jzfxw May 12 10:02:56.987: INFO: Got endpoints: latency-svc-jzfxw [1.104904844s] May 12 10:02:57.027: INFO: Created: latency-svc-5bs5z May 12 10:02:57.076: INFO: Got endpoints: latency-svc-5bs5z [1.029601302s] May 12 10:02:57.086: INFO: Created: latency-svc-f9zqb May 12 10:02:57.102: INFO: Got endpoints: latency-svc-f9zqb [923.601484ms] May 12 10:02:57.138: INFO: Created: latency-svc-lc86h May 12 10:02:57.244: INFO: Got endpoints: latency-svc-lc86h [1.005120919s] May 12 10:02:57.294: INFO: Created: latency-svc-hfnp2 May 12 10:02:57.325: INFO: Got endpoints: latency-svc-hfnp2 [996.960365ms] May 12 10:02:57.428: INFO: Created: latency-svc-cp2r4 May 12 10:02:57.445: INFO: Got endpoints: latency-svc-cp2r4 [1.08128046s] May 12 10:02:57.567: INFO: Created: latency-svc-bxdd4 May 12 10:02:57.570: INFO: Got endpoints: latency-svc-bxdd4 [1.156531626s] May 12 10:02:57.667: 
INFO: Created: latency-svc-qlklh May 12 10:02:57.722: INFO: Got endpoints: latency-svc-qlklh [1.229841714s] May 12 10:02:57.759: INFO: Created: latency-svc-nr7p9 May 12 10:02:57.776: INFO: Got endpoints: latency-svc-nr7p9 [1.216892319s] May 12 10:02:57.898: INFO: Created: latency-svc-7m67j May 12 10:02:57.902: INFO: Got endpoints: latency-svc-7m67j [1.259298421s] May 12 10:02:57.963: INFO: Created: latency-svc-nvgqw May 12 10:02:57.981: INFO: Got endpoints: latency-svc-nvgqw [1.301384553s] May 12 10:02:58.090: INFO: Created: latency-svc-2zk7q May 12 10:02:58.092: INFO: Got endpoints: latency-svc-2zk7q [1.308477986s] May 12 10:02:58.125: INFO: Created: latency-svc-v4fcs May 12 10:02:58.146: INFO: Got endpoints: latency-svc-v4fcs [1.316371383s] May 12 10:02:58.168: INFO: Created: latency-svc-thtlk May 12 10:02:58.179: INFO: Got endpoints: latency-svc-thtlk [1.312403227s] May 12 10:02:58.251: INFO: Created: latency-svc-d7q79 May 12 10:02:58.254: INFO: Got endpoints: latency-svc-d7q79 [1.318320889s] May 12 10:02:58.323: INFO: Created: latency-svc-x28pb May 12 10:02:58.406: INFO: Got endpoints: latency-svc-x28pb [1.418624825s] May 12 10:02:58.416: INFO: Created: latency-svc-t9824 May 12 10:02:58.432: INFO: Got endpoints: latency-svc-t9824 [1.356082189s] May 12 10:02:58.459: INFO: Created: latency-svc-trh2b May 12 10:02:58.473: INFO: Got endpoints: latency-svc-trh2b [1.370405167s] May 12 10:02:58.495: INFO: Created: latency-svc-w98s2 May 12 10:02:58.505: INFO: Got endpoints: latency-svc-w98s2 [1.261401786s] May 12 10:02:58.556: INFO: Created: latency-svc-zjl95 May 12 10:02:58.584: INFO: Got endpoints: latency-svc-zjl95 [1.25858234s] May 12 10:02:58.627: INFO: Created: latency-svc-d58vj May 12 10:02:58.705: INFO: Got endpoints: latency-svc-d58vj [1.260186608s] May 12 10:02:58.723: INFO: Created: latency-svc-s6ftt May 12 10:02:58.784: INFO: Created: latency-svc-s2h2x May 12 10:02:58.853: INFO: Got endpoints: latency-svc-s6ftt [1.283014371s] May 12 10:02:58.861: INFO: Got 
endpoints: latency-svc-s2h2x [1.139421626s] May 12 10:02:58.890: INFO: Created: latency-svc-f98w5 May 12 10:02:58.909: INFO: Got endpoints: latency-svc-f98w5 [1.133326782s] May 12 10:02:58.938: INFO: Created: latency-svc-96xxt May 12 10:02:58.999: INFO: Got endpoints: latency-svc-96xxt [1.096688092s] May 12 10:02:59.011: INFO: Created: latency-svc-7v4nb May 12 10:02:59.030: INFO: Got endpoints: latency-svc-7v4nb [1.049120731s] May 12 10:02:59.066: INFO: Created: latency-svc-xqwfz May 12 10:02:59.084: INFO: Got endpoints: latency-svc-xqwfz [992.213218ms] May 12 10:02:59.084: INFO: Latencies: [115.000474ms 167.303629ms 284.299757ms 299.925088ms 329.815362ms 404.588737ms 611.492226ms 625.982398ms 796.857416ms 830.946796ms 923.601484ms 947.631321ms 953.629438ms 956.33645ms 959.477446ms 966.16778ms 982.781004ms 989.258575ms 990.835864ms 992.213218ms 992.84626ms 996.0696ms 996.3073ms 996.960365ms 1.000876434s 1.005120919s 1.006661247s 1.016470108s 1.025924848s 1.027649763s 1.02791927s 1.029601302s 1.032243245s 1.042853417s 1.04597951s 1.049120731s 1.050738707s 1.055274765s 1.0576397s 1.06078808s 1.062719668s 1.062893971s 1.065632752s 1.066699753s 1.067264174s 1.06876229s 1.073345242s 1.08128046s 1.091113651s 1.093681795s 1.096688092s 1.098119939s 1.104904844s 1.108792815s 1.111021936s 1.115053452s 1.116863915s 1.119074305s 1.122632716s 1.126169206s 1.127414446s 1.13020597s 1.133326782s 1.139421626s 1.145647384s 1.156531626s 1.156833843s 1.156838948s 1.166095528s 1.180318615s 1.18904755s 1.200217951s 1.208552663s 1.216741698s 1.216892319s 1.228551068s 1.229841714s 1.230583987s 1.241878614s 1.243989383s 1.245379918s 1.248997056s 1.249653744s 1.251639931s 1.252204974s 1.253270182s 1.25405481s 1.255602052s 1.25858234s 1.259298421s 1.260186608s 1.261401786s 1.261848678s 1.265605563s 1.270945819s 1.283014371s 1.283202992s 1.300801245s 1.301384553s 1.302133377s 1.30329516s 1.306504421s 1.308477986s 1.312403227s 1.314820735s 1.316371383s 1.318320889s 1.31908336s 1.324580577s 
1.325144367s 1.334392573s 1.336119847s 1.342585762s 1.352687976s 1.356082189s 1.358548859s 1.360440127s 1.362150644s 1.364879249s 1.366765786s 1.368222363s 1.370405167s 1.378731447s 1.391016108s 1.402012028s 1.416421688s 1.417688986s 1.418624825s 1.420957509s 1.444211954s 1.445159413s 1.477075427s 1.498990709s 1.507328678s 1.520544459s 1.534513007s 1.536962958s 1.54874453s 1.558046416s 1.559211153s 1.570106043s 1.598472636s 1.59930365s 1.604893692s 1.607386897s 1.61492005s 1.616828695s 1.618670277s 1.623766614s 1.631569492s 1.643653822s 1.650396784s 1.652466609s 1.658996594s 1.660132493s 1.660330978s 1.665518987s 1.673904238s 1.681011898s 1.691912256s 1.737751818s 1.742440482s 1.743545703s 1.785883627s 1.88851635s 1.922964791s 1.935621044s 2.073172307s 2.104215739s 2.135678856s 2.192544547s 2.304903801s 2.372841212s 2.401951889s 2.444677362s 2.492831039s 2.575891087s 2.640731945s 2.64346193s 2.654785995s 2.676879299s 2.735653912s 2.822243084s 2.836889116s 2.886894316s 2.899034847s 2.915658264s 2.922261922s 3.00771334s 3.031392445s 3.164082731s 3.264731807s 3.281300272s 3.408564873s 3.427108088s 3.432104026s 3.490133494s 3.564018152s 3.644888293s 3.655051928s] May 12 10:02:59.084: INFO: 50 %ile: 1.30329516s May 12 10:02:59.084: INFO: 90 %ile: 2.676879299s May 12 10:02:59.084: INFO: 99 %ile: 3.644888293s May 12 10:02:59.084: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:02:59.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-qsqwn" for this suite. 
May 12 10:03:37.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:03:37.216: INFO: namespace: e2e-tests-svc-latency-qsqwn, resource: bindings, ignored listing per whitelist May 12 10:03:37.239: INFO: namespace e2e-tests-svc-latency-qsqwn deletion completed in 38.14807809s • [SLOW TEST:64.728 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:03:37.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 12 10:03:37.527: INFO: Waiting up to 5m0s for pod "client-containers-d91fdd18-9437-11ea-92b2-0242ac11001c" in namespace "e2e-tests-containers-qj4qw" to be "success or failure" May 12 10:03:37.549: INFO: Pod "client-containers-d91fdd18-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.811109ms May 12 10:03:39.695: INFO: Pod "client-containers-d91fdd18-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.168064204s May 12 10:03:41.698: INFO: Pod "client-containers-d91fdd18-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171198358s May 12 10:03:43.701: INFO: Pod "client-containers-d91fdd18-9437-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.174271567s STEP: Saw pod success May 12 10:03:43.701: INFO: Pod "client-containers-d91fdd18-9437-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:03:43.703: INFO: Trying to get logs from node hunter-worker2 pod client-containers-d91fdd18-9437-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 10:03:44.002: INFO: Waiting for pod client-containers-d91fdd18-9437-11ea-92b2-0242ac11001c to disappear May 12 10:03:44.149: INFO: Pod client-containers-d91fdd18-9437-11ea-92b2-0242ac11001c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:03:44.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-qj4qw" for this suite. 
May 12 10:03:50.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:03:50.197: INFO: namespace: e2e-tests-containers-qj4qw, resource: bindings, ignored listing per whitelist May 12 10:03:50.211: INFO: namespace e2e-tests-containers-qj4qw deletion completed in 6.059392657s • [SLOW TEST:12.971 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:03:50.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 10:03:50.397: INFO: Waiting up to 5m0s for pod "pod-e0c194ac-9437-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-gj2dd" to be "success or failure" May 12 10:03:50.408: INFO: Pod "pod-e0c194ac-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.595202ms May 12 10:03:52.413: INFO: Pod "pod-e0c194ac-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01571204s May 12 10:03:54.417: INFO: Pod "pod-e0c194ac-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019702676s May 12 10:03:56.455: INFO: Pod "pod-e0c194ac-9437-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.057305222s May 12 10:03:58.459: INFO: Pod "pod-e0c194ac-9437-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061825301s STEP: Saw pod success May 12 10:03:58.459: INFO: Pod "pod-e0c194ac-9437-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:03:58.462: INFO: Trying to get logs from node hunter-worker pod pod-e0c194ac-9437-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 10:03:58.546: INFO: Waiting for pod pod-e0c194ac-9437-11ea-92b2-0242ac11001c to disappear May 12 10:03:58.570: INFO: Pod pod-e0c194ac-9437-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:03:58.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gj2dd" for this suite. 
May 12 10:04:06.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:04:06.741: INFO: namespace: e2e-tests-emptydir-gj2dd, resource: bindings, ignored listing per whitelist May 12 10:04:06.743: INFO: namespace e2e-tests-emptydir-gj2dd deletion completed in 8.169098892s • [SLOW TEST:16.532 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:04:06.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-ea9efe51-9437-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 10:04:07.003: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eaa4733e-9437-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-gn5kn" to be "success or failure" May 12 10:04:07.023: INFO: Pod "pod-projected-configmaps-eaa4733e-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.151012ms May 12 10:04:09.026: INFO: Pod "pod-projected-configmaps-eaa4733e-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022650573s May 12 10:04:11.148: INFO: Pod "pod-projected-configmaps-eaa4733e-9437-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.144707915s STEP: Saw pod success May 12 10:04:11.148: INFO: Pod "pod-projected-configmaps-eaa4733e-9437-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:04:11.151: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-eaa4733e-9437-11ea-92b2-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 10:04:11.222: INFO: Waiting for pod pod-projected-configmaps-eaa4733e-9437-11ea-92b2-0242ac11001c to disappear May 12 10:04:11.226: INFO: Pod pod-projected-configmaps-eaa4733e-9437-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:04:11.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gn5kn" for this suite. 
May 12 10:04:17.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:04:17.316: INFO: namespace: e2e-tests-projected-gn5kn, resource: bindings, ignored listing per whitelist May 12 10:04:17.338: INFO: namespace e2e-tests-projected-gn5kn deletion completed in 6.108393803s • [SLOW TEST:10.595 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:04:17.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-f0ec8c05-9437-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume secrets May 12 10:04:17.471: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f0f13321-9437-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-4hwfg" to be "success or failure" May 12 10:04:17.496: INFO: Pod 
"pod-projected-secrets-f0f13321-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.420968ms May 12 10:04:19.502: INFO: Pod "pod-projected-secrets-f0f13321-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030533785s May 12 10:04:21.505: INFO: Pod "pod-projected-secrets-f0f13321-9437-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.034425451s May 12 10:04:23.510: INFO: Pod "pod-projected-secrets-f0f13321-9437-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038709827s STEP: Saw pod success May 12 10:04:23.510: INFO: Pod "pod-projected-secrets-f0f13321-9437-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:04:23.513: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-f0f13321-9437-11ea-92b2-0242ac11001c container projected-secret-volume-test: STEP: delete the pod May 12 10:04:23.556: INFO: Waiting for pod pod-projected-secrets-f0f13321-9437-11ea-92b2-0242ac11001c to disappear May 12 10:04:23.564: INFO: Pod pod-projected-secrets-f0f13321-9437-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:04:23.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4hwfg" for this suite. 
May 12 10:04:29.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:04:29.624: INFO: namespace: e2e-tests-projected-4hwfg, resource: bindings, ignored listing per whitelist May 12 10:04:29.724: INFO: namespace e2e-tests-projected-4hwfg deletion completed in 6.156087023s • [SLOW TEST:12.386 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:04:29.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 10:04:29.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f84dec41-9437-11ea-92b2-0242ac11001c" in namespace 
"e2e-tests-projected-h67cz" to be "success or failure" May 12 10:04:30.169: INFO: Pod "downwardapi-volume-f84dec41-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 326.345903ms May 12 10:04:32.184: INFO: Pod "downwardapi-volume-f84dec41-9437-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34128536s May 12 10:04:34.188: INFO: Pod "downwardapi-volume-f84dec41-9437-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.345281063s STEP: Saw pod success May 12 10:04:34.188: INFO: Pod "downwardapi-volume-f84dec41-9437-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:04:34.190: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f84dec41-9437-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 10:04:34.367: INFO: Waiting for pod downwardapi-volume-f84dec41-9437-11ea-92b2-0242ac11001c to disappear May 12 10:04:34.371: INFO: Pod downwardapi-volume-f84dec41-9437-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:04:34.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h67cz" for this suite. 
May 12 10:04:40.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:04:40.483: INFO: namespace: e2e-tests-projected-h67cz, resource: bindings, ignored listing per whitelist May 12 10:04:40.533: INFO: namespace e2e-tests-projected-h67cz deletion completed in 6.157839786s • [SLOW TEST:10.808 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:04:40.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 12 10:04:55.139: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} May 12 10:04:55.139: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.167588 6 log.go:172] (0xc0009a6840) (0xc001c52f00) Create stream I0512 10:04:55.167612 6 log.go:172] (0xc0009a6840) (0xc001c52f00) Stream added, broadcasting: 1 I0512 10:04:55.169271 6 log.go:172] (0xc0009a6840) Reply frame received for 1 I0512 10:04:55.169286 6 log.go:172] (0xc0009a6840) (0xc000a5a500) Create stream I0512 10:04:55.169295 6 log.go:172] (0xc0009a6840) (0xc000a5a500) Stream added, broadcasting: 3 I0512 10:04:55.170051 6 log.go:172] (0xc0009a6840) Reply frame received for 3 I0512 10:04:55.170084 6 log.go:172] (0xc0009a6840) (0xc001c52fa0) Create stream I0512 10:04:55.170094 6 log.go:172] (0xc0009a6840) (0xc001c52fa0) Stream added, broadcasting: 5 I0512 10:04:55.170806 6 log.go:172] (0xc0009a6840) Reply frame received for 5 I0512 10:04:55.236173 6 log.go:172] (0xc0009a6840) Data frame received for 5 I0512 10:04:55.236207 6 log.go:172] (0xc001c52fa0) (5) Data frame handling I0512 10:04:55.236278 6 log.go:172] (0xc0009a6840) Data frame received for 3 I0512 10:04:55.236318 6 log.go:172] (0xc000a5a500) (3) Data frame handling I0512 10:04:55.236336 6 log.go:172] (0xc000a5a500) (3) Data frame sent I0512 10:04:55.236353 6 log.go:172] (0xc0009a6840) Data frame received for 3 I0512 10:04:55.236364 6 log.go:172] (0xc000a5a500) (3) Data frame handling I0512 10:04:55.237554 6 log.go:172] (0xc0009a6840) Data frame received for 1 I0512 10:04:55.237579 6 log.go:172] (0xc001c52f00) (1) Data frame handling I0512 10:04:55.237603 6 log.go:172] (0xc001c52f00) (1) Data frame sent I0512 10:04:55.237624 6 log.go:172] (0xc0009a6840) (0xc001c52f00) Stream removed, broadcasting: 1 I0512 10:04:55.237646 6 log.go:172] (0xc0009a6840) Go away received I0512 10:04:55.237801 6 log.go:172] (0xc0009a6840) (0xc001c52f00) Stream removed, broadcasting: 1 I0512 10:04:55.237835 6 log.go:172] (0xc0009a6840) (0xc000a5a500) Stream removed, broadcasting: 3 I0512 10:04:55.237858 6 log.go:172] 
(0xc0009a6840) (0xc001c52fa0) Stream removed, broadcasting: 5 May 12 10:04:55.237: INFO: Exec stderr: "" May 12 10:04:55.237: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:04:55.237: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.272221 6 log.go:172] (0xc0000eaf20) (0xc0018fe820) Create stream I0512 10:04:55.272251 6 log.go:172] (0xc0000eaf20) (0xc0018fe820) Stream added, broadcasting: 1 I0512 10:04:55.274194 6 log.go:172] (0xc0000eaf20) Reply frame received for 1 I0512 10:04:55.274225 6 log.go:172] (0xc0000eaf20) (0xc001c53040) Create stream I0512 10:04:55.274235 6 log.go:172] (0xc0000eaf20) (0xc001c53040) Stream added, broadcasting: 3 I0512 10:04:55.275354 6 log.go:172] (0xc0000eaf20) Reply frame received for 3 I0512 10:04:55.275379 6 log.go:172] (0xc0000eaf20) (0xc001c530e0) Create stream I0512 10:04:55.275388 6 log.go:172] (0xc0000eaf20) (0xc001c530e0) Stream added, broadcasting: 5 I0512 10:04:55.276017 6 log.go:172] (0xc0000eaf20) Reply frame received for 5 I0512 10:04:55.319240 6 log.go:172] (0xc0000eaf20) Data frame received for 5 I0512 10:04:55.319260 6 log.go:172] (0xc001c530e0) (5) Data frame handling I0512 10:04:55.319281 6 log.go:172] (0xc0000eaf20) Data frame received for 3 I0512 10:04:55.319288 6 log.go:172] (0xc001c53040) (3) Data frame handling I0512 10:04:55.319295 6 log.go:172] (0xc001c53040) (3) Data frame sent I0512 10:04:55.319303 6 log.go:172] (0xc0000eaf20) Data frame received for 3 I0512 10:04:55.319307 6 log.go:172] (0xc001c53040) (3) Data frame handling I0512 10:04:55.320076 6 log.go:172] (0xc0000eaf20) Data frame received for 1 I0512 10:04:55.320099 6 log.go:172] (0xc0018fe820) (1) Data frame handling I0512 10:04:55.320115 6 log.go:172] (0xc0018fe820) (1) Data frame sent I0512 10:04:55.320128 6 log.go:172] (0xc0000eaf20) (0xc0018fe820) Stream removed, 
broadcasting: 1 I0512 10:04:55.320163 6 log.go:172] (0xc0000eaf20) Go away received I0512 10:04:55.320247 6 log.go:172] (0xc0000eaf20) (0xc0018fe820) Stream removed, broadcasting: 1 I0512 10:04:55.320265 6 log.go:172] (0xc0000eaf20) (0xc001c53040) Stream removed, broadcasting: 3 I0512 10:04:55.320271 6 log.go:172] (0xc0000eaf20) (0xc001c530e0) Stream removed, broadcasting: 5 May 12 10:04:55.320: INFO: Exec stderr: "" May 12 10:04:55.320: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:04:55.320: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.341500 6 log.go:172] (0xc000e642c0) (0xc000a5a820) Create stream I0512 10:04:55.341522 6 log.go:172] (0xc000e642c0) (0xc000a5a820) Stream added, broadcasting: 1 I0512 10:04:55.344380 6 log.go:172] (0xc000e642c0) Reply frame received for 1 I0512 10:04:55.344438 6 log.go:172] (0xc000e642c0) (0xc0018fe8c0) Create stream I0512 10:04:55.344461 6 log.go:172] (0xc000e642c0) (0xc0018fe8c0) Stream added, broadcasting: 3 I0512 10:04:55.345514 6 log.go:172] (0xc000e642c0) Reply frame received for 3 I0512 10:04:55.345567 6 log.go:172] (0xc000e642c0) (0xc0018fe960) Create stream I0512 10:04:55.345585 6 log.go:172] (0xc000e642c0) (0xc0018fe960) Stream added, broadcasting: 5 I0512 10:04:55.346677 6 log.go:172] (0xc000e642c0) Reply frame received for 5 I0512 10:04:55.394203 6 log.go:172] (0xc000e642c0) Data frame received for 3 I0512 10:04:55.394232 6 log.go:172] (0xc0018fe8c0) (3) Data frame handling I0512 10:04:55.394248 6 log.go:172] (0xc0018fe8c0) (3) Data frame sent I0512 10:04:55.394259 6 log.go:172] (0xc000e642c0) Data frame received for 3 I0512 10:04:55.394271 6 log.go:172] (0xc0018fe8c0) (3) Data frame handling I0512 10:04:55.394285 6 log.go:172] (0xc000e642c0) Data frame received for 5 I0512 10:04:55.394296 6 log.go:172] (0xc0018fe960) (5) Data frame 
handling I0512 10:04:55.395657 6 log.go:172] (0xc000e642c0) Data frame received for 1 I0512 10:04:55.395679 6 log.go:172] (0xc000a5a820) (1) Data frame handling I0512 10:04:55.395697 6 log.go:172] (0xc000a5a820) (1) Data frame sent I0512 10:04:55.395712 6 log.go:172] (0xc000e642c0) (0xc000a5a820) Stream removed, broadcasting: 1 I0512 10:04:55.395729 6 log.go:172] (0xc000e642c0) Go away received I0512 10:04:55.395811 6 log.go:172] (0xc000e642c0) (0xc000a5a820) Stream removed, broadcasting: 1 I0512 10:04:55.395830 6 log.go:172] (0xc000e642c0) (0xc0018fe8c0) Stream removed, broadcasting: 3 I0512 10:04:55.395854 6 log.go:172] (0xc000e642c0) (0xc0018fe960) Stream removed, broadcasting: 5 May 12 10:04:55.395: INFO: Exec stderr: "" May 12 10:04:55.395: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:04:55.395: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.421786 6 log.go:172] (0xc001af44d0) (0xc001cb3ae0) Create stream I0512 10:04:55.421806 6 log.go:172] (0xc001af44d0) (0xc001cb3ae0) Stream added, broadcasting: 1 I0512 10:04:55.423440 6 log.go:172] (0xc001af44d0) Reply frame received for 1 I0512 10:04:55.423471 6 log.go:172] (0xc001af44d0) (0xc000a5a8c0) Create stream I0512 10:04:55.423481 6 log.go:172] (0xc001af44d0) (0xc000a5a8c0) Stream added, broadcasting: 3 I0512 10:04:55.424201 6 log.go:172] (0xc001af44d0) Reply frame received for 3 I0512 10:04:55.424221 6 log.go:172] (0xc001af44d0) (0xc001c53180) Create stream I0512 10:04:55.424227 6 log.go:172] (0xc001af44d0) (0xc001c53180) Stream added, broadcasting: 5 I0512 10:04:55.424868 6 log.go:172] (0xc001af44d0) Reply frame received for 5 I0512 10:04:55.494198 6 log.go:172] (0xc001af44d0) Data frame received for 5 I0512 10:04:55.494254 6 log.go:172] (0xc001af44d0) Data frame received for 3 I0512 10:04:55.494302 6 log.go:172] 
(0xc000a5a8c0) (3) Data frame handling I0512 10:04:55.494334 6 log.go:172] (0xc000a5a8c0) (3) Data frame sent I0512 10:04:55.494361 6 log.go:172] (0xc001af44d0) Data frame received for 3 I0512 10:04:55.494372 6 log.go:172] (0xc000a5a8c0) (3) Data frame handling I0512 10:04:55.494390 6 log.go:172] (0xc001c53180) (5) Data frame handling I0512 10:04:55.495210 6 log.go:172] (0xc001af44d0) Data frame received for 1 I0512 10:04:55.495221 6 log.go:172] (0xc001cb3ae0) (1) Data frame handling I0512 10:04:55.495226 6 log.go:172] (0xc001cb3ae0) (1) Data frame sent I0512 10:04:55.495232 6 log.go:172] (0xc001af44d0) (0xc001cb3ae0) Stream removed, broadcasting: 1 I0512 10:04:55.495262 6 log.go:172] (0xc001af44d0) Go away received I0512 10:04:55.495283 6 log.go:172] (0xc001af44d0) (0xc001cb3ae0) Stream removed, broadcasting: 1 I0512 10:04:55.495331 6 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc000a5a8c0), 0x5:(*spdystream.Stream)(0xc001c53180)} I0512 10:04:55.495354 6 log.go:172] (0xc001af44d0) (0xc000a5a8c0) Stream removed, broadcasting: 3 I0512 10:04:55.495379 6 log.go:172] (0xc001af44d0) (0xc001c53180) Stream removed, broadcasting: 5 May 12 10:04:55.495: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 12 10:04:55.495: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:04:55.495: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.525786 6 log.go:172] (0xc001af49a0) (0xc001cb3d60) Create stream I0512 10:04:55.525805 6 log.go:172] (0xc001af49a0) (0xc001cb3d60) Stream added, broadcasting: 1 I0512 10:04:55.528226 6 log.go:172] (0xc001af49a0) Reply frame received for 1 I0512 10:04:55.528276 6 log.go:172] (0xc001af49a0) (0xc001c78000) Create stream I0512 10:04:55.528291 6 
log.go:172] (0xc001af49a0) (0xc001c78000) Stream added, broadcasting: 3 I0512 10:04:55.529102 6 log.go:172] (0xc001af49a0) Reply frame received for 3 I0512 10:04:55.529443 6 log.go:172] (0xc001af49a0) (0xc001c53220) Create stream I0512 10:04:55.529458 6 log.go:172] (0xc001af49a0) (0xc001c53220) Stream added, broadcasting: 5 I0512 10:04:55.530291 6 log.go:172] (0xc001af49a0) Reply frame received for 5 I0512 10:04:55.594671 6 log.go:172] (0xc001af49a0) Data frame received for 3 I0512 10:04:55.594708 6 log.go:172] (0xc001c78000) (3) Data frame handling I0512 10:04:55.594732 6 log.go:172] (0xc001c78000) (3) Data frame sent I0512 10:04:55.594751 6 log.go:172] (0xc001af49a0) Data frame received for 3 I0512 10:04:55.594762 6 log.go:172] (0xc001c78000) (3) Data frame handling I0512 10:04:55.594791 6 log.go:172] (0xc001af49a0) Data frame received for 5 I0512 10:04:55.594810 6 log.go:172] (0xc001c53220) (5) Data frame handling I0512 10:04:55.595412 6 log.go:172] (0xc001af49a0) Data frame received for 1 I0512 10:04:55.595439 6 log.go:172] (0xc001cb3d60) (1) Data frame handling I0512 10:04:55.595483 6 log.go:172] (0xc001cb3d60) (1) Data frame sent I0512 10:04:55.595509 6 log.go:172] (0xc001af49a0) (0xc001cb3d60) Stream removed, broadcasting: 1 I0512 10:04:55.595606 6 log.go:172] (0xc001af49a0) (0xc001cb3d60) Stream removed, broadcasting: 1 I0512 10:04:55.595622 6 log.go:172] (0xc001af49a0) (0xc001c78000) Stream removed, broadcasting: 3 I0512 10:04:55.595777 6 log.go:172] (0xc001af49a0) (0xc001c53220) Stream removed, broadcasting: 5 May 12 10:04:55.595: INFO: Exec stderr: "" May 12 10:04:55.595: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:04:55.595: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.623684 6 log.go:172] (0xc00163e2c0) (0xc001c78460) Create stream I0512 10:04:55.623734 6 
log.go:172] (0xc00163e2c0) (0xc001c78460) Stream added, broadcasting: 1 I0512 10:04:55.626203 6 log.go:172] (0xc00163e2c0) Reply frame received for 1 I0512 10:04:55.626248 6 log.go:172] (0xc00163e2c0) (0xc001cb3ea0) Create stream I0512 10:04:55.626272 6 log.go:172] (0xc00163e2c0) (0xc001cb3ea0) Stream added, broadcasting: 3 I0512 10:04:55.627201 6 log.go:172] (0xc00163e2c0) Reply frame received for 3 I0512 10:04:55.627228 6 log.go:172] (0xc00163e2c0) (0xc0018fea00) Create stream I0512 10:04:55.627238 6 log.go:172] (0xc00163e2c0) (0xc0018fea00) Stream added, broadcasting: 5 I0512 10:04:55.627906 6 log.go:172] (0xc00163e2c0) Reply frame received for 5 I0512 10:04:55.692080 6 log.go:172] (0xc00163e2c0) Data frame received for 5 I0512 10:04:55.692097 6 log.go:172] (0xc0018fea00) (5) Data frame handling I0512 10:04:55.692114 6 log.go:172] (0xc00163e2c0) Data frame received for 3 I0512 10:04:55.692132 6 log.go:172] (0xc001cb3ea0) (3) Data frame handling I0512 10:04:55.692151 6 log.go:172] (0xc001cb3ea0) (3) Data frame sent I0512 10:04:55.692162 6 log.go:172] (0xc00163e2c0) Data frame received for 3 I0512 10:04:55.692173 6 log.go:172] (0xc001cb3ea0) (3) Data frame handling I0512 10:04:55.693046 6 log.go:172] (0xc00163e2c0) Data frame received for 1 I0512 10:04:55.693062 6 log.go:172] (0xc001c78460) (1) Data frame handling I0512 10:04:55.693074 6 log.go:172] (0xc001c78460) (1) Data frame sent I0512 10:04:55.693083 6 log.go:172] (0xc00163e2c0) (0xc001c78460) Stream removed, broadcasting: 1 I0512 10:04:55.693091 6 log.go:172] (0xc00163e2c0) Go away received I0512 10:04:55.693287 6 log.go:172] (0xc00163e2c0) (0xc001c78460) Stream removed, broadcasting: 1 I0512 10:04:55.693305 6 log.go:172] (0xc00163e2c0) (0xc001cb3ea0) Stream removed, broadcasting: 3 I0512 10:04:55.693321 6 log.go:172] (0xc00163e2c0) (0xc0018fea00) Stream removed, broadcasting: 5 May 12 10:04:55.693: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with 
hostNetwork=true May 12 10:04:55.693: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:04:55.693: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.718440 6 log.go:172] (0xc000e64790) (0xc000a5abe0) Create stream I0512 10:04:55.718478 6 log.go:172] (0xc000e64790) (0xc000a5abe0) Stream added, broadcasting: 1 I0512 10:04:55.721622 6 log.go:172] (0xc000e64790) Reply frame received for 1 I0512 10:04:55.721662 6 log.go:172] (0xc000e64790) (0xc001c78500) Create stream I0512 10:04:55.721674 6 log.go:172] (0xc000e64790) (0xc001c78500) Stream added, broadcasting: 3 I0512 10:04:55.722335 6 log.go:172] (0xc000e64790) Reply frame received for 3 I0512 10:04:55.722370 6 log.go:172] (0xc000e64790) (0xc001c785a0) Create stream I0512 10:04:55.722386 6 log.go:172] (0xc000e64790) (0xc001c785a0) Stream added, broadcasting: 5 I0512 10:04:55.723153 6 log.go:172] (0xc000e64790) Reply frame received for 5 I0512 10:04:55.784560 6 log.go:172] (0xc000e64790) Data frame received for 3 I0512 10:04:55.784603 6 log.go:172] (0xc001c78500) (3) Data frame handling I0512 10:04:55.784632 6 log.go:172] (0xc001c78500) (3) Data frame sent I0512 10:04:55.784647 6 log.go:172] (0xc000e64790) Data frame received for 3 I0512 10:04:55.784659 6 log.go:172] (0xc001c78500) (3) Data frame handling I0512 10:04:55.784728 6 log.go:172] (0xc000e64790) Data frame received for 5 I0512 10:04:55.784755 6 log.go:172] (0xc001c785a0) (5) Data frame handling I0512 10:04:55.786293 6 log.go:172] (0xc000e64790) Data frame received for 1 I0512 10:04:55.786308 6 log.go:172] (0xc000a5abe0) (1) Data frame handling I0512 10:04:55.786317 6 log.go:172] (0xc000a5abe0) (1) Data frame sent I0512 10:04:55.786456 6 log.go:172] (0xc000e64790) (0xc000a5abe0) Stream removed, broadcasting: 1 I0512 10:04:55.786496 6 log.go:172] (0xc000e64790) Go away received 
I0512 10:04:55.786564 6 log.go:172] (0xc000e64790) (0xc000a5abe0) Stream removed, broadcasting: 1 I0512 10:04:55.786606 6 log.go:172] (0xc000e64790) (0xc001c78500) Stream removed, broadcasting: 3 I0512 10:04:55.786646 6 log.go:172] (0xc000e64790) (0xc001c785a0) Stream removed, broadcasting: 5 May 12 10:04:55.786: INFO: Exec stderr: "" May 12 10:04:55.786: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:04:55.786: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.818822 6 log.go:172] (0xc0009a6d10) (0xc001c534a0) Create stream I0512 10:04:55.818961 6 log.go:172] (0xc0009a6d10) (0xc001c534a0) Stream added, broadcasting: 1 I0512 10:04:55.821670 6 log.go:172] (0xc0009a6d10) Reply frame received for 1 I0512 10:04:55.821703 6 log.go:172] (0xc0009a6d10) (0xc001c78640) Create stream I0512 10:04:55.821714 6 log.go:172] (0xc0009a6d10) (0xc001c78640) Stream added, broadcasting: 3 I0512 10:04:55.822682 6 log.go:172] (0xc0009a6d10) Reply frame received for 3 I0512 10:04:55.822721 6 log.go:172] (0xc0009a6d10) (0xc001c53540) Create stream I0512 10:04:55.822735 6 log.go:172] (0xc0009a6d10) (0xc001c53540) Stream added, broadcasting: 5 I0512 10:04:55.823395 6 log.go:172] (0xc0009a6d10) Reply frame received for 5 I0512 10:04:55.869772 6 log.go:172] (0xc0009a6d10) Data frame received for 5 I0512 10:04:55.869812 6 log.go:172] (0xc001c53540) (5) Data frame handling I0512 10:04:55.869844 6 log.go:172] (0xc0009a6d10) Data frame received for 3 I0512 10:04:55.869861 6 log.go:172] (0xc001c78640) (3) Data frame handling I0512 10:04:55.869887 6 log.go:172] (0xc001c78640) (3) Data frame sent I0512 10:04:55.869900 6 log.go:172] (0xc0009a6d10) Data frame received for 3 I0512 10:04:55.869917 6 log.go:172] (0xc001c78640) (3) Data frame handling I0512 10:04:55.870533 6 log.go:172] (0xc0009a6d10) Data 
frame received for 1 I0512 10:04:55.870557 6 log.go:172] (0xc001c534a0) (1) Data frame handling I0512 10:04:55.870570 6 log.go:172] (0xc001c534a0) (1) Data frame sent I0512 10:04:55.870601 6 log.go:172] (0xc0009a6d10) (0xc001c534a0) Stream removed, broadcasting: 1 I0512 10:04:55.870634 6 log.go:172] (0xc0009a6d10) Go away received I0512 10:04:55.870703 6 log.go:172] (0xc0009a6d10) (0xc001c534a0) Stream removed, broadcasting: 1 I0512 10:04:55.870731 6 log.go:172] (0xc0009a6d10) (0xc001c78640) Stream removed, broadcasting: 3 I0512 10:04:55.870750 6 log.go:172] (0xc0009a6d10) (0xc001c53540) Stream removed, broadcasting: 5 May 12 10:04:55.870: INFO: Exec stderr: "" May 12 10:04:55.870: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:04:55.870: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.909465 6 log.go:172] (0xc0009a71e0) (0xc001c537c0) Create stream I0512 10:04:55.909483 6 log.go:172] (0xc0009a71e0) (0xc001c537c0) Stream added, broadcasting: 1 I0512 10:04:55.910486 6 log.go:172] (0xc0009a71e0) Reply frame received for 1 I0512 10:04:55.910508 6 log.go:172] (0xc0009a71e0) (0xc001c78780) Create stream I0512 10:04:55.910514 6 log.go:172] (0xc0009a71e0) (0xc001c78780) Stream added, broadcasting: 3 I0512 10:04:55.910896 6 log.go:172] (0xc0009a71e0) Reply frame received for 3 I0512 10:04:55.910911 6 log.go:172] (0xc0009a71e0) (0xc001cb3f40) Create stream I0512 10:04:55.910917 6 log.go:172] (0xc0009a71e0) (0xc001cb3f40) Stream added, broadcasting: 5 I0512 10:04:55.911265 6 log.go:172] (0xc0009a71e0) Reply frame received for 5 I0512 10:04:55.957748 6 log.go:172] (0xc0009a71e0) Data frame received for 5 I0512 10:04:55.957787 6 log.go:172] (0xc001cb3f40) (5) Data frame handling I0512 10:04:55.957946 6 log.go:172] (0xc0009a71e0) Data frame received for 3 I0512 10:04:55.957969 6 
log.go:172] (0xc001c78780) (3) Data frame handling I0512 10:04:55.957982 6 log.go:172] (0xc001c78780) (3) Data frame sent I0512 10:04:55.957994 6 log.go:172] (0xc0009a71e0) Data frame received for 3 I0512 10:04:55.958004 6 log.go:172] (0xc001c78780) (3) Data frame handling I0512 10:04:55.959035 6 log.go:172] (0xc0009a71e0) Data frame received for 1 I0512 10:04:55.959057 6 log.go:172] (0xc001c537c0) (1) Data frame handling I0512 10:04:55.959068 6 log.go:172] (0xc001c537c0) (1) Data frame sent I0512 10:04:55.959083 6 log.go:172] (0xc0009a71e0) (0xc001c537c0) Stream removed, broadcasting: 1 I0512 10:04:55.959098 6 log.go:172] (0xc0009a71e0) Go away received I0512 10:04:55.959189 6 log.go:172] (0xc0009a71e0) (0xc001c537c0) Stream removed, broadcasting: 1 I0512 10:04:55.959206 6 log.go:172] (0xc0009a71e0) (0xc001c78780) Stream removed, broadcasting: 3 I0512 10:04:55.959216 6 log.go:172] (0xc0009a71e0) (0xc001cb3f40) Stream removed, broadcasting: 5 May 12 10:04:55.959: INFO: Exec stderr: "" May 12 10:04:55.959: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6rr4c PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:04:55.959: INFO: >>> kubeConfig: /root/.kube/config I0512 10:04:55.991195 6 log.go:172] (0xc00163e790) (0xc001c78a00) Create stream I0512 10:04:55.991229 6 log.go:172] (0xc00163e790) (0xc001c78a00) Stream added, broadcasting: 1 I0512 10:04:55.993571 6 log.go:172] (0xc00163e790) Reply frame received for 1 I0512 10:04:55.993600 6 log.go:172] (0xc00163e790) (0xc0018feaa0) Create stream I0512 10:04:55.993606 6 log.go:172] (0xc00163e790) (0xc0018feaa0) Stream added, broadcasting: 3 I0512 10:04:55.994371 6 log.go:172] (0xc00163e790) Reply frame received for 3 I0512 10:04:55.994405 6 log.go:172] (0xc00163e790) (0xc0014d2000) Create stream I0512 10:04:55.994422 6 log.go:172] (0xc00163e790) (0xc0014d2000) Stream added, broadcasting: 5 
I0512 10:04:55.995132 6 log.go:172] (0xc00163e790) Reply frame received for 5 I0512 10:04:56.071429 6 log.go:172] (0xc00163e790) Data frame received for 3 I0512 10:04:56.071461 6 log.go:172] (0xc0018feaa0) (3) Data frame handling I0512 10:04:56.071470 6 log.go:172] (0xc0018feaa0) (3) Data frame sent I0512 10:04:56.071478 6 log.go:172] (0xc00163e790) Data frame received for 3 I0512 10:04:56.071485 6 log.go:172] (0xc0018feaa0) (3) Data frame handling I0512 10:04:56.071499 6 log.go:172] (0xc00163e790) Data frame received for 5 I0512 10:04:56.071505 6 log.go:172] (0xc0014d2000) (5) Data frame handling I0512 10:04:56.072654 6 log.go:172] (0xc00163e790) Data frame received for 1 I0512 10:04:56.072662 6 log.go:172] (0xc001c78a00) (1) Data frame handling I0512 10:04:56.072668 6 log.go:172] (0xc001c78a00) (1) Data frame sent I0512 10:04:56.072674 6 log.go:172] (0xc00163e790) (0xc001c78a00) Stream removed, broadcasting: 1 I0512 10:04:56.072716 6 log.go:172] (0xc00163e790) (0xc001c78a00) Stream removed, broadcasting: 1 I0512 10:04:56.072723 6 log.go:172] (0xc00163e790) (0xc0018feaa0) Stream removed, broadcasting: 3 I0512 10:04:56.072821 6 log.go:172] (0xc00163e790) (0xc0014d2000) Stream removed, broadcasting: 5 I0512 10:04:56.072886 6 log.go:172] (0xc00163e790) Go away received May 12 10:04:56.072: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:04:56.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-6rr4c" for this suite. 
May 12 10:05:44.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:05:44.138: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-6rr4c, resource: bindings, ignored listing per whitelist
May 12 10:05:44.155: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-6rr4c deletion completed in 48.078358986s

• [SLOW TEST:63.621 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:05:44.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
May 12 10:05:44.338: INFO: namespace e2e-tests-kubectl-shj2k
May 12 10:05:44.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-shj2k'
May 12 10:05:48.237: INFO: stderr: ""
May 12 10:05:48.238: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 12 10:05:49.241: INFO: Selector matched 1 pods for map[app:redis]
May 12 10:05:49.241: INFO: Found 0 / 1
May 12 10:05:50.252: INFO: Selector matched 1 pods for map[app:redis]
May 12 10:05:50.252: INFO: Found 0 / 1
May 12 10:05:51.284: INFO: Selector matched 1 pods for map[app:redis]
May 12 10:05:51.284: INFO: Found 0 / 1
May 12 10:05:52.242: INFO: Selector matched 1 pods for map[app:redis]
May 12 10:05:52.243: INFO: Found 0 / 1
May 12 10:05:53.253: INFO: Selector matched 1 pods for map[app:redis]
May 12 10:05:53.253: INFO: Found 1 / 1
May 12 10:05:53.253: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 12 10:05:53.256: INFO: Selector matched 1 pods for map[app:redis]
May 12 10:05:53.256: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 12 10:05:53.256: INFO: wait on redis-master startup in e2e-tests-kubectl-shj2k
May 12 10:05:53.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ftxwf redis-master --namespace=e2e-tests-kubectl-shj2k'
May 12 10:05:53.368: INFO: stderr: ""
May 12 10:05:53.368: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 10:05:52.486 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 10:05:52.486 # Server started, Redis version 3.2.12\n1:M 12 May 10:05:52.486 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 May 10:05:52.486 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
May 12 10:05:53.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-shj2k'
May 12 10:05:53.541: INFO: stderr: ""
May 12 10:05:53.541: INFO: stdout: "service/rm2 exposed\n"
May 12 10:05:53.624: INFO: Service rm2 in namespace e2e-tests-kubectl-shj2k found.
STEP: exposing service
May 12 10:05:55.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-shj2k'
May 12 10:05:55.884: INFO: stderr: ""
May 12 10:05:55.884: INFO: stdout: "service/rm3 exposed\n"
May 12 10:05:55.953: INFO: Service rm3 in namespace e2e-tests-kubectl-shj2k found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:05:57.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-shj2k" for this suite.
May 12 10:06:19.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:06:20.051: INFO: namespace: e2e-tests-kubectl-shj2k, resource: bindings, ignored listing per whitelist
May 12 10:06:20.053: INFO: namespace e2e-tests-kubectl-shj2k deletion completed in 22.092677713s

• [SLOW TEST:35.899 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:06:20.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
May 12 10:06:20.150: INFO: Waiting up to 5m0s for pod "var-expansion-3a0ee502-9438-11ea-92b2-0242ac11001c" in namespace "e2e-tests-var-expansion-hhndk" to be "success or failure"
May 12 10:06:20.156: INFO: Pod "var-expansion-3a0ee502-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.456456ms
May 12 10:06:22.211: INFO: Pod "var-expansion-3a0ee502-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061226156s
May 12 10:06:24.216: INFO: Pod "var-expansion-3a0ee502-9438-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.065531914s
May 12 10:06:26.220: INFO: Pod "var-expansion-3a0ee502-9438-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0698273s
STEP: Saw pod success
May 12 10:06:26.220: INFO: Pod "var-expansion-3a0ee502-9438-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:06:26.223: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-3a0ee502-9438-11ea-92b2-0242ac11001c container dapi-container:
STEP: delete the pod
May 12 10:06:26.303: INFO: Waiting for pod var-expansion-3a0ee502-9438-11ea-92b2-0242ac11001c to disappear
May 12 10:06:26.310: INFO: Pod var-expansion-3a0ee502-9438-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:06:26.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-hhndk" for this suite.
May 12 10:06:32.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:06:32.382: INFO: namespace: e2e-tests-var-expansion-hhndk, resource: bindings, ignored listing per whitelist
May 12 10:06:32.396: INFO: namespace e2e-tests-var-expansion-hhndk deletion completed in 6.08281407s

• [SLOW TEST:12.342 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:06:32.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:07:14.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-5wkq2" for this suite.
May 12 10:07:20.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:07:20.885: INFO: namespace: e2e-tests-container-runtime-5wkq2, resource: bindings, ignored listing per whitelist
May 12 10:07:20.990: INFO: namespace e2e-tests-container-runtime-5wkq2 deletion completed in 6.246295077s

• [SLOW TEST:48.594 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:07:20.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 12 10:07:21.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-ql5gl'
May 12 10:07:21.299: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 12 10:07:21.299: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
May 12 10:07:23.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-ql5gl'
May 12 10:07:23.571: INFO: stderr: ""
May 12 10:07:23.572: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:07:23.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ql5gl" for this suite.
May 12 10:07:47.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:07:47.856: INFO: namespace: e2e-tests-kubectl-ql5gl, resource: bindings, ignored listing per whitelist
May 12 10:07:47.883: INFO: namespace e2e-tests-kubectl-ql5gl deletion completed in 24.283378116s

• [SLOW TEST:26.893 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:07:47.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May 12 10:07:54.258: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-6e6c6631-9438-11ea-92b2-0242ac11001c,GenerateName:,Namespace:e2e-tests-events-6cw5w,SelfLink:/api/v1/namespaces/e2e-tests-events-6cw5w/pods/send-events-6e6c6631-9438-11ea-92b2-0242ac11001c,UID:6e709769-9438-11ea-99e8-0242ac110002,ResourceVersion:10142154,Generation:0,CreationTimestamp:2020-05-12 10:07:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 989871194,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mmk6g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mmk6g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-mmk6g true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000e68760} {node.kubernetes.io/unreachable Exists NoExecute 0xc000e68780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:07:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:07:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:07:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:07:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.249,StartTime:2020-05-12 10:07:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-12 10:07:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://590c9aa683fe940faac94abde2ed634f4310c1be8199befcd2a1c90171729347}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
May 12 10:07:56.262: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May 12 10:07:58.265: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:07:58.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-6cw5w" for this suite.
May 12 10:08:44.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:08:44.347: INFO: namespace: e2e-tests-events-6cw5w, resource: bindings, ignored listing per whitelist
May 12 10:08:44.422: INFO: namespace e2e-tests-events-6cw5w deletion completed in 46.121693245s

• [SLOW TEST:56.539 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:08:44.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-905d4861-9438-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume configMaps
May 12 10:08:44.961: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-905dad7c-9438-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-8r7rx" to be "success or failure"
May 12 10:08:44.994: INFO: Pod "pod-projected-configmaps-905dad7c-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.135209ms
May 12 10:08:46.997: INFO: Pod "pod-projected-configmaps-905dad7c-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036243426s
May 12 10:08:49.001: INFO: Pod "pod-projected-configmaps-905dad7c-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040158883s
May 12 10:08:51.005: INFO: Pod "pod-projected-configmaps-905dad7c-9438-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043994613s
STEP: Saw pod success
May 12 10:08:51.005: INFO: Pod "pod-projected-configmaps-905dad7c-9438-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:08:51.007: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-905dad7c-9438-11ea-92b2-0242ac11001c container projected-configmap-volume-test:
STEP: delete the pod
May 12 10:08:51.040: INFO: Waiting for pod pod-projected-configmaps-905dad7c-9438-11ea-92b2-0242ac11001c to disappear
May 12 10:08:51.049: INFO: Pod pod-projected-configmaps-905dad7c-9438-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:08:51.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8r7rx" for this suite.
May 12 10:08:59.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:08:59.101: INFO: namespace: e2e-tests-projected-8r7rx, resource: bindings, ignored listing per whitelist
May 12 10:08:59.167: INFO: namespace e2e-tests-projected-8r7rx deletion completed in 8.115230827s
• [SLOW TEST:14.745 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:08:59.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:09:07.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-bvtm4" for this suite.
May 12 10:09:15.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:09:15.723: INFO: namespace: e2e-tests-kubelet-test-bvtm4, resource: bindings, ignored listing per whitelist
May 12 10:09:15.758: INFO: namespace e2e-tests-kubelet-test-bvtm4 deletion completed in 8.38548617s
• [SLOW TEST:16.590 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:09:15.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 12 10:09:16.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2ea09c0-9438-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-gc66s" to be "success or failure"
May 12 10:09:16.535: INFO: Pod "downwardapi-volume-a2ea09c0-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.201574ms
May 12 10:09:18.615: INFO: Pod "downwardapi-volume-a2ea09c0-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083199706s
May 12 10:09:20.619: INFO: Pod "downwardapi-volume-a2ea09c0-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086936449s
May 12 10:09:22.898: INFO: Pod "downwardapi-volume-a2ea09c0-9438-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.366484746s
STEP: Saw pod success
May 12 10:09:22.898: INFO: Pod "downwardapi-volume-a2ea09c0-9438-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:09:23.052: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a2ea09c0-9438-11ea-92b2-0242ac11001c container client-container:
STEP: delete the pod
May 12 10:09:23.091: INFO: Waiting for pod downwardapi-volume-a2ea09c0-9438-11ea-92b2-0242ac11001c to disappear
May 12 10:09:23.507: INFO: Pod downwardapi-volume-a2ea09c0-9438-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:09:23.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gc66s" for this suite.
May 12 10:09:29.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:09:29.868: INFO: namespace: e2e-tests-projected-gc66s, resource: bindings, ignored listing per whitelist
May 12 10:09:29.876: INFO: namespace e2e-tests-projected-gc66s deletion completed in 6.363756053s
• [SLOW TEST:14.118 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:09:29.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vxjqz
May 12 10:09:36.409: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vxjqz
STEP: checking the pod's current state and verifying that restartCount is present
May 12 10:09:36.411: INFO: Initial restart count of pod liveness-http is 0
May 12 10:09:58.541: INFO: Restart count of pod e2e-tests-container-probe-vxjqz/liveness-http is now 1 (22.12980649s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:09:58.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vxjqz" for this suite.
May 12 10:10:04.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:10:04.992: INFO: namespace: e2e-tests-container-probe-vxjqz, resource: bindings, ignored listing per whitelist
May 12 10:10:05.038: INFO: namespace e2e-tests-container-probe-vxjqz deletion completed in 6.358045371s
• [SLOW TEST:35.162 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:10:05.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
May 12 10:10:06.084: INFO: Waiting up to 5m0s for pod "pod-c0bb10b5-9438-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-bnb2r" to be "success or failure"
May 12 10:10:06.128: INFO: Pod "pod-c0bb10b5-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 43.661787ms
May 12 10:10:08.132: INFO: Pod "pod-c0bb10b5-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047892576s
May 12 10:10:10.136: INFO: Pod "pod-c0bb10b5-9438-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051701023s
May 12 10:10:12.155: INFO: Pod "pod-c0bb10b5-9438-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070035165s
STEP: Saw pod success
May 12 10:10:12.155: INFO: Pod "pod-c0bb10b5-9438-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:10:12.157: INFO: Trying to get logs from node hunter-worker pod pod-c0bb10b5-9438-11ea-92b2-0242ac11001c container test-container:
STEP: delete the pod
May 12 10:10:12.336: INFO: Waiting for pod pod-c0bb10b5-9438-11ea-92b2-0242ac11001c to disappear
May 12 10:10:12.387: INFO: Pod pod-c0bb10b5-9438-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:10:12.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bnb2r" for this suite.
May 12 10:10:18.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:10:18.480: INFO: namespace: e2e-tests-emptydir-bnb2r, resource: bindings, ignored listing per whitelist
May 12 10:10:18.482: INFO: namespace e2e-tests-emptydir-bnb2r deletion completed in 6.091155914s
• [SLOW TEST:13.444 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:10:18.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-44lxp
May 12 10:10:22.649: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-44lxp
STEP: checking the pod's current state and verifying that restartCount is present
May 12 10:10:22.652: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:14:24.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-44lxp" for this suite.
May 12 10:14:30.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:14:30.698: INFO: namespace: e2e-tests-container-probe-44lxp, resource: bindings, ignored listing per whitelist
May 12 10:14:30.698: INFO: namespace e2e-tests-container-probe-44lxp deletion completed in 6.096542264s
• [SLOW TEST:252.216 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:14:30.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 12 10:14:30.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xkbm6'
May 12 10:14:30.894: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 12 10:14:30.894: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
May 12 10:14:30.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-xkbm6'
May 12 10:14:31.042: INFO: stderr: ""
May 12 10:14:31.042: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:14:31.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xkbm6" for this suite.
May 12 10:14:53.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:14:53.093: INFO: namespace: e2e-tests-kubectl-xkbm6, resource: bindings, ignored listing per whitelist
May 12 10:14:53.271: INFO: namespace e2e-tests-kubectl-xkbm6 deletion completed in 22.225797071s
• [SLOW TEST:22.574 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:14:53.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
May 12 10:14:53.481: INFO: Waiting up to 5m0s for pod "pod-6c084818-9439-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-gwk49" to be "success or failure"
May 12 10:14:53.528: INFO: Pod "pod-6c084818-9439-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.661273ms
May 12 10:14:55.531: INFO: Pod "pod-6c084818-9439-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049698811s
May 12 10:14:57.534: INFO: Pod "pod-6c084818-9439-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052889335s
STEP: Saw pod success
May 12 10:14:57.534: INFO: Pod "pod-6c084818-9439-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:14:57.536: INFO: Trying to get logs from node hunter-worker pod pod-6c084818-9439-11ea-92b2-0242ac11001c container test-container:
STEP: delete the pod
May 12 10:14:57.860: INFO: Waiting for pod pod-6c084818-9439-11ea-92b2-0242ac11001c to disappear
May 12 10:14:57.918: INFO: Pod pod-6c084818-9439-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:14:57.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gwk49" for this suite.
May 12 10:15:03.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:15:04.044: INFO: namespace: e2e-tests-emptydir-gwk49, resource: bindings, ignored listing per whitelist
May 12 10:15:04.048: INFO: namespace e2e-tests-emptydir-gwk49 deletion completed in 6.127783995s
• [SLOW TEST:10.777 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:15:04.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-mtmct
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StaefulSet
May 12 10:15:04.179: INFO: Found 0 stateful pods, waiting for 3
May 12 10:15:14.184: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:15:14.184: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:15:14.184: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 12 10:15:24.490: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:15:24.490: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:15:24.490: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
May 12 10:15:24.512: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 12 10:15:34.764: INFO: Updating stateful set ss2
May 12 10:15:34.941: INFO: Waiting for Pod e2e-tests-statefulset-mtmct/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
May 12 10:15:46.424: INFO: Found 2 stateful pods, waiting for 3
May 12 10:15:56.429: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:15:56.429: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:15:56.429: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 12 10:16:06.429: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:16:06.429: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:16:06.429: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 12 10:16:07.276: INFO: Updating stateful set ss2
May 12 10:16:07.287: INFO: Waiting for Pod e2e-tests-statefulset-mtmct/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 12 10:16:17.303: INFO: Waiting for Pod e2e-tests-statefulset-mtmct/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 12 10:16:27.728: INFO: Updating stateful set ss2
May 12 10:16:27.780: INFO: Waiting for StatefulSet e2e-tests-statefulset-mtmct/ss2 to complete update
May 12 10:16:27.780: INFO: Waiting for Pod e2e-tests-statefulset-mtmct/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 12 10:16:37.785: INFO: Waiting for StatefulSet e2e-tests-statefulset-mtmct/ss2 to complete update
May 12 10:16:37.785: INFO: Waiting for Pod e2e-tests-statefulset-mtmct/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 12 10:16:48.214: INFO: Waiting for StatefulSet e2e-tests-statefulset-mtmct/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 12 10:16:58.178: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mtmct
May 12 10:16:58.180: INFO: Scaling statefulset ss2 to 0
May 12 10:17:29.010: INFO: Waiting for statefulset status.replicas updated to 0
May 12 10:17:29.013: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:17:29.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-mtmct" for this suite.
May 12 10:17:37.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:17:37.778: INFO: namespace: e2e-tests-statefulset-mtmct, resource: bindings, ignored listing per whitelist
May 12 10:17:37.817: INFO: namespace e2e-tests-statefulset-mtmct deletion completed in 8.507046904s
• [SLOW TEST:153.768 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:17:37.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 12 10:17:37.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce1209a9-9439-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-4vqlg" to be "success or failure"
May 12 10:17:37.986: INFO: Pod "downwardapi-volume-ce1209a9-9439-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.003434ms
May 12 10:17:39.990: INFO: Pod "downwardapi-volume-ce1209a9-9439-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02026619s
May 12 10:17:42.052: INFO: Pod "downwardapi-volume-ce1209a9-9439-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08165333s
STEP: Saw pod success
May 12 10:17:42.052: INFO: Pod "downwardapi-volume-ce1209a9-9439-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:17:42.055: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ce1209a9-9439-11ea-92b2-0242ac11001c container client-container:
STEP: delete the pod
May 12 10:17:42.129: INFO: Waiting for pod downwardapi-volume-ce1209a9-9439-11ea-92b2-0242ac11001c to disappear
May 12 10:17:42.251: INFO: Pod downwardapi-volume-ce1209a9-9439-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:17:42.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4vqlg" for this suite.
May 12 10:17:48.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:17:48.433: INFO: namespace: e2e-tests-downward-api-4vqlg, resource: bindings, ignored listing per whitelist May 12 10:17:48.448: INFO: namespace e2e-tests-downward-api-4vqlg deletion completed in 6.192882086s • [SLOW TEST:10.631 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:17:48.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nj2qc A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nj2qc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-nj2qc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nj2qc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nj2qc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nj2qc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-nj2qc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nj2qc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-nj2qc.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nj2qc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.180.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.180.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.180.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.180.170_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nj2qc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-nj2qc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nj2qc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nj2qc.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nj2qc.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nj2qc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-nj2qc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nj2qc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-nj2qc.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nj2qc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.180.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.180.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.180.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.180.170_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 10:18:00.527: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.539: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.568: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.571: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.573: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.576: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.578: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.580: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.582: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.584: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:00.600: INFO: Lookups using e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc 
jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc]
May 12 10:18:05.605: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.611: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.647: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.650: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.653: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.655: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.657: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.659: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.662: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.664: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:05.676: INFO: Lookups using e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc]
May 12 10:18:10.604: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.610: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.641: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.644: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.647: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.649: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.652: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.655: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.657: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.660: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:10.676: INFO: Lookups using e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc]
May 12 10:18:15.616: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.621: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.652: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.654: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.657: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get 
pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.659: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.661: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.663: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.666: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:15.682: INFO: Lookups using e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc]
May 12 10:18:20.605: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.613: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.644: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.647: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.650: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.652: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.655: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.657: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.659: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.661: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:20.679: INFO: Lookups using e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc]
May 12 10:18:25.814: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.820: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.926: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.929: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.932: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.934: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.936: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.939: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.941: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.943: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc from pod e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c: the server could not find the requested resource (get pods dns-test-d51edf37-9439-11ea-92b2-0242ac11001c)
May 12 10:18:25.954: INFO: Lookups using e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nj2qc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc jessie_udp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@dns-test-service.e2e-tests-dns-nj2qc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nj2qc.svc]
May 12 10:18:30.690: INFO: DNS probes using e2e-tests-dns-nj2qc/dns-test-d51edf37-9439-11ea-92b2-0242ac11001c succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:18:31.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-nj2qc" for this suite.
May 12 10:18:37.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:18:37.923: INFO: namespace: e2e-tests-dns-nj2qc, resource: bindings, ignored listing per whitelist
May 12 10:18:37.928: INFO: namespace e2e-tests-dns-nj2qc deletion completed in 6.177697197s

• [SLOW TEST:49.480 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:18:37.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 10:18:38.107: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f1e205c4-9439-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001c3627a), BlockOwnerDeletion:(*bool)(0xc001c3627b)}}
May 12 10:18:38.146: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f1e01000-9439-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001c36432), BlockOwnerDeletion:(*bool)(0xc001c36433)}}
May 12 10:18:38.185: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f1e09042-9439-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001c3a65a), BlockOwnerDeletion:(*bool)(0xc001c3a65b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:18:43.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-p4q86" for this suite.
May 12 10:18:51.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:18:51.364: INFO: namespace: e2e-tests-gc-p4q86, resource: bindings, ignored listing per whitelist
May 12 10:18:51.385: INFO: namespace e2e-tests-gc-p4q86 deletion completed in 8.093040216s

• [SLOW TEST:13.457 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:18:51.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 12 10:19:01.143: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:19:01.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-mvxjw" for this suite.
May 12 10:19:27.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:19:27.320: INFO: namespace: e2e-tests-replicaset-mvxjw, resource: bindings, ignored listing per whitelist
May 12 10:19:27.335: INFO: namespace e2e-tests-replicaset-mvxjw deletion completed in 26.142910739s

• [SLOW TEST:35.949 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:19:27.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:19:27.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-4pgsb" for this suite.
May 12 10:19:35.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:19:35.702: INFO: namespace: e2e-tests-kubelet-test-4pgsb, resource: bindings, ignored listing per whitelist
May 12 10:19:35.744: INFO: namespace e2e-tests-kubelet-test-4pgsb deletion completed in 8.248414643s

• [SLOW TEST:8.409 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:19:35.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-sp2w
STEP: Creating a pod to test atomic-volume-subpath
May 12 10:19:36.116: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sp2w" in namespace "e2e-tests-subpath-qxfhf" to be "success or failure"
May 12 10:19:36.150: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Pending", Reason="", readiness=false. Elapsed: 33.860929ms
May 12 10:19:38.154: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037421217s
May 12 10:19:40.159: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043010777s
May 12 10:19:42.167: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05035704s
May 12 10:19:44.170: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053460159s
May 12 10:19:46.174: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Running", Reason="", readiness=true. Elapsed: 10.05783313s
May 12 10:19:48.178: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Running", Reason="", readiness=false. Elapsed: 12.061318151s
May 12 10:19:50.182: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Running", Reason="", readiness=false. Elapsed: 14.065379102s
May 12 10:19:52.186: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Running", Reason="", readiness=false. Elapsed: 16.069336657s
May 12 10:19:54.335: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.21899598s May 12 10:19:56.340: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Running", Reason="", readiness=false. Elapsed: 20.223574101s May 12 10:19:58.344: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Running", Reason="", readiness=false. Elapsed: 22.227456344s May 12 10:20:00.348: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Running", Reason="", readiness=false. Elapsed: 24.231689979s May 12 10:20:02.352: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Running", Reason="", readiness=false. Elapsed: 26.235564975s May 12 10:20:04.360: INFO: Pod "pod-subpath-test-configmap-sp2w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.243293568s STEP: Saw pod success May 12 10:20:04.360: INFO: Pod "pod-subpath-test-configmap-sp2w" satisfied condition "success or failure" May 12 10:20:04.366: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-sp2w container test-container-subpath-configmap-sp2w: STEP: delete the pod May 12 10:20:04.565: INFO: Waiting for pod pod-subpath-test-configmap-sp2w to disappear May 12 10:20:04.575: INFO: Pod pod-subpath-test-configmap-sp2w no longer exists STEP: Deleting pod pod-subpath-test-configmap-sp2w May 12 10:20:04.575: INFO: Deleting pod "pod-subpath-test-configmap-sp2w" in namespace "e2e-tests-subpath-qxfhf" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:20:04.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-qxfhf" for this suite. 
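The subpath test above exercises mounting a single ConfigMap key through `subPath`. A minimal sketch of such a pod manifest follows, assuming hypothetical names and a busybox image (this is not the framework's actual pod builder):

```python
def configmap_subpath_pod(name, configmap_name, key, mount_path):
    # Sketch of a pod that mounts one ConfigMap key via subPath, in the
    # spirit of pod-subpath-test-configmap-sp2w above. All names are hypothetical.
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                "command": ["cat", mount_path],
                "volumeMounts": [{
                    "name": "config-volume",
                    "mountPath": mount_path,
                    # subPath mounts only this key instead of the whole volume dir
                    "subPath": key,
                }],
            }],
            "volumes": [{
                "name": "config-volume",
                "configMap": {"name": configmap_name},
            }],
        },
    }

pod = configmap_subpath_pod("subpath-demo", "demo-config", "data-1", "/etc/demo/data-1")
```

Because the mount carries `subPath`, only the named key is projected at `mountPath` rather than the whole ConfigMap directory.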
May 12 10:20:10.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:20:10.666: INFO: namespace: e2e-tests-subpath-qxfhf, resource: bindings, ignored listing per whitelist May 12 10:20:10.686: INFO: namespace e2e-tests-subpath-qxfhf deletion completed in 6.10658538s • [SLOW TEST:34.942 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:20:10.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:20:17.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-b7kfr" for this suite. 
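The hostAliases test above verifies the entries the kubelet appends to a pod's /etc/hosts. A rough sketch of that rendering, assuming one tab-separated line per alias entry (the kubelet also writes a comment header, omitted here; the exact formatting is an assumption of this sketch):

```python
def host_aliases_lines(aliases):
    # Render hostAliases entries as they would appear in the pod's /etc/hosts:
    # one tab-separated line per entry, IP first, then its hostnames.
    return ["\t".join([entry["ip"], *entry["hostnames"]]) for entry in aliases]

lines = host_aliases_lines([
    {"ip": "123.45.67.89", "hostnames": ["foo.remote", "bar.remote"]},
])
```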
May 12 10:21:05.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:21:05.203: INFO: namespace: e2e-tests-kubelet-test-b7kfr, resource: bindings, ignored listing per whitelist
May 12 10:21:05.255: INFO: namespace e2e-tests-kubelet-test-b7kfr deletion completed in 48.093314729s
• [SLOW TEST:54.568 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:21:05.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 12 10:21:06.038: INFO: Waiting up to 5m0s for pod "pod-49f9ab71-943a-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-pxvsn" to be "success or failure"
May 12 10:21:06.487: INFO: Pod "pod-49f9ab71-943a-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 448.917567ms
May 12 10:21:08.490: INFO: Pod "pod-49f9ab71-943a-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452358656s
May 12 10:21:10.495: INFO: Pod "pod-49f9ab71-943a-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456956188s
May 12 10:21:12.499: INFO: Pod "pod-49f9ab71-943a-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.461153525s
May 12 10:21:14.503: INFO: Pod "pod-49f9ab71-943a-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.465333071s
STEP: Saw pod success
May 12 10:21:14.503: INFO: Pod "pod-49f9ab71-943a-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:21:14.507: INFO: Trying to get logs from node hunter-worker2 pod pod-49f9ab71-943a-11ea-92b2-0242ac11001c container test-container:
STEP: delete the pod
May 12 10:21:14.534: INFO: Waiting for pod pod-49f9ab71-943a-11ea-92b2-0242ac11001c to disappear
May 12 10:21:14.550: INFO: Pod pod-49f9ab71-943a-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:21:14.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pxvsn" for this suite.
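The emptyDir test above checks the permission bits (0644) of a file on a tmpfs-backed volume. A tiny helper showing how a numeric mode maps to the ls-style string a test container would report:

```python
import stat

def mode_string(mode: int) -> str:
    # Render a regular file's permission bits in ls -l style,
    # e.g. 0o644 -> "-rw-r--r--".
    return stat.filemode(stat.S_IFREG | mode)

print(mode_string(0o644))  # -rw-r--r--
```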
May 12 10:21:20.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:21:21.034: INFO: namespace: e2e-tests-emptydir-pxvsn, resource: bindings, ignored listing per whitelist
May 12 10:21:21.036: INFO: namespace e2e-tests-emptydir-pxvsn deletion completed in 6.482679224s
• [SLOW TEST:15.781 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:21:21.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 10:21:27.766: INFO: Waiting up to 5m0s for pod "client-envvars-5707296f-943a-11ea-92b2-0242ac11001c" in namespace "e2e-tests-pods-gx54w" to be "success or failure"
May 12 10:21:27.857: INFO: Pod "client-envvars-5707296f-943a-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 90.250633ms
May 12 10:21:29.911: INFO: Pod "client-envvars-5707296f-943a-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14489915s
May 12 10:21:31.915: INFO: Pod "client-envvars-5707296f-943a-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14862956s
May 12 10:21:33.918: INFO: Pod "client-envvars-5707296f-943a-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151839899s
STEP: Saw pod success
May 12 10:21:33.918: INFO: Pod "client-envvars-5707296f-943a-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:21:33.920: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-5707296f-943a-11ea-92b2-0242ac11001c container env3cont:
STEP: delete the pod
May 12 10:21:34.195: INFO: Waiting for pod client-envvars-5707296f-943a-11ea-92b2-0242ac11001c to disappear
May 12 10:21:34.654: INFO: Pod client-envvars-5707296f-943a-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:21:34.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gx54w" for this suite.
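The Pods test above asserts that service details appear as environment variables in containers. A sketch of the naming convention for the two core variables (the service name is uppercased with dashes turned into underscores; Docker-link-style `*_PORT_*` variables are also injected but omitted here; the service name and IP below are hypothetical):

```python
def service_link_env(service_name: str, cluster_ip: str, port: int) -> dict:
    # Build the kubelet-injected *_SERVICE_HOST / *_SERVICE_PORT variables
    # for a service visible to a pod at creation time.
    base = service_name.upper().replace("-", "_")
    return {
        f"{base}_SERVICE_HOST": cluster_ip,
        f"{base}_SERVICE_PORT": str(port),
    }

env = service_link_env("fooservice", "10.96.0.42", 8765)
```

Note these variables are snapshotted when the pod starts, which is why the test creates the service before the client pod.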
May 12 10:22:26.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:22:26.718: INFO: namespace: e2e-tests-pods-gx54w, resource: bindings, ignored listing per whitelist May 12 10:22:26.854: INFO: namespace e2e-tests-pods-gx54w deletion completed in 52.19740734s • [SLOW TEST:65.817 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:22:26.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-8m85l [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 12 10:22:28.314: INFO: Found 0 stateful pods, waiting 
for 3 May 12 10:22:38.318: INFO: Found 2 stateful pods, waiting for 3 May 12 10:22:48.318: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 10:22:48.318: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:22:48.318: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 10:22:58.488: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 10:22:58.488: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:22:58.488: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 12 10:22:58.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8m85l ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 10:22:59.378: INFO: stderr: "I0512 10:22:58.623581 239 log.go:172] (0xc000744370) (0xc0005b5400) Create stream\nI0512 10:22:58.623633 239 log.go:172] (0xc000744370) (0xc0005b5400) Stream added, broadcasting: 1\nI0512 10:22:58.626276 239 log.go:172] (0xc000744370) Reply frame received for 1\nI0512 10:22:58.626332 239 log.go:172] (0xc000744370) (0xc000764000) Create stream\nI0512 10:22:58.626348 239 log.go:172] (0xc000744370) (0xc000764000) Stream added, broadcasting: 3\nI0512 10:22:58.627219 239 log.go:172] (0xc000744370) Reply frame received for 3\nI0512 10:22:58.627256 239 log.go:172] (0xc000744370) (0xc0006ac000) Create stream\nI0512 10:22:58.627270 239 log.go:172] (0xc000744370) (0xc0006ac000) Stream added, broadcasting: 5\nI0512 10:22:58.627997 239 log.go:172] (0xc000744370) Reply frame received for 5\nI0512 10:22:59.369705 239 log.go:172] (0xc000744370) Data frame received for 3\nI0512 10:22:59.369743 239 log.go:172] (0xc000764000) (3) Data frame handling\nI0512 10:22:59.369773 239 log.go:172] (0xc000764000) (3) Data frame 
sent\nI0512 10:22:59.370110 239 log.go:172] (0xc000744370) Data frame received for 5\nI0512 10:22:59.370147 239 log.go:172] (0xc0006ac000) (5) Data frame handling\nI0512 10:22:59.370708 239 log.go:172] (0xc000744370) Data frame received for 3\nI0512 10:22:59.370741 239 log.go:172] (0xc000764000) (3) Data frame handling\nI0512 10:22:59.372433 239 log.go:172] (0xc000744370) Data frame received for 1\nI0512 10:22:59.372459 239 log.go:172] (0xc0005b5400) (1) Data frame handling\nI0512 10:22:59.372499 239 log.go:172] (0xc0005b5400) (1) Data frame sent\nI0512 10:22:59.372517 239 log.go:172] (0xc000744370) (0xc0005b5400) Stream removed, broadcasting: 1\nI0512 10:22:59.372550 239 log.go:172] (0xc000744370) Go away received\nI0512 10:22:59.373510 239 log.go:172] (0xc000744370) (0xc0005b5400) Stream removed, broadcasting: 1\nI0512 10:22:59.373546 239 log.go:172] (0xc000744370) (0xc000764000) Stream removed, broadcasting: 3\nI0512 10:22:59.373571 239 log.go:172] (0xc000744370) (0xc0006ac000) Stream removed, broadcasting: 5\n" May 12 10:22:59.378: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 10:22:59.378: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 12 10:23:09.407: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 12 10:23:19.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8m85l ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:23:19.926: INFO: stderr: "I0512 10:23:19.858498 261 log.go:172] (0xc0008402c0) (0xc0005d34a0) Create stream\nI0512 10:23:19.858538 261 log.go:172] (0xc0008402c0) (0xc0005d34a0) Stream added, broadcasting: 1\nI0512 10:23:19.861064 261 
log.go:172] (0xc0008402c0) Reply frame received for 1\nI0512 10:23:19.861089 261 log.go:172] (0xc0008402c0) (0xc0005d3540) Create stream\nI0512 10:23:19.861096 261 log.go:172] (0xc0008402c0) (0xc0005d3540) Stream added, broadcasting: 3\nI0512 10:23:19.861788 261 log.go:172] (0xc0008402c0) Reply frame received for 3\nI0512 10:23:19.861817 261 log.go:172] (0xc0008402c0) (0xc0005d35e0) Create stream\nI0512 10:23:19.861830 261 log.go:172] (0xc0008402c0) (0xc0005d35e0) Stream added, broadcasting: 5\nI0512 10:23:19.862522 261 log.go:172] (0xc0008402c0) Reply frame received for 5\nI0512 10:23:19.919724 261 log.go:172] (0xc0008402c0) Data frame received for 5\nI0512 10:23:19.919808 261 log.go:172] (0xc0005d35e0) (5) Data frame handling\nI0512 10:23:19.919854 261 log.go:172] (0xc0008402c0) Data frame received for 3\nI0512 10:23:19.919899 261 log.go:172] (0xc0005d3540) (3) Data frame handling\nI0512 10:23:19.919939 261 log.go:172] (0xc0005d3540) (3) Data frame sent\nI0512 10:23:19.920044 261 log.go:172] (0xc0008402c0) Data frame received for 3\nI0512 10:23:19.920058 261 log.go:172] (0xc0005d3540) (3) Data frame handling\nI0512 10:23:19.922053 261 log.go:172] (0xc0008402c0) Data frame received for 1\nI0512 10:23:19.922065 261 log.go:172] (0xc0005d34a0) (1) Data frame handling\nI0512 10:23:19.922071 261 log.go:172] (0xc0005d34a0) (1) Data frame sent\nI0512 10:23:19.922200 261 log.go:172] (0xc0008402c0) (0xc0005d34a0) Stream removed, broadcasting: 1\nI0512 10:23:19.922323 261 log.go:172] (0xc0008402c0) Go away received\nI0512 10:23:19.922360 261 log.go:172] (0xc0008402c0) (0xc0005d34a0) Stream removed, broadcasting: 1\nI0512 10:23:19.922384 261 log.go:172] (0xc0008402c0) (0xc0005d3540) Stream removed, broadcasting: 3\nI0512 10:23:19.922400 261 log.go:172] (0xc0008402c0) (0xc0005d35e0) Stream removed, broadcasting: 5\n" May 12 10:23:19.926: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 10:23:19.926: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 10:23:30.166: INFO: Waiting for StatefulSet e2e-tests-statefulset-8m85l/ss2 to complete update May 12 10:23:30.166: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:23:30.166: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:23:30.166: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:23:40.170: INFO: Waiting for StatefulSet e2e-tests-statefulset-8m85l/ss2 to complete update May 12 10:23:40.170: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:23:40.170: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:23:50.171: INFO: Waiting for StatefulSet e2e-tests-statefulset-8m85l/ss2 to complete update May 12 10:23:50.171: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:23:50.171: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:24:00.173: INFO: Waiting for StatefulSet e2e-tests-statefulset-8m85l/ss2 to complete update May 12 10:24:00.173: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:24:10.170: INFO: Waiting for StatefulSet e2e-tests-statefulset-8m85l/ss2 to complete update May 12 10:24:10.170: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 12 10:24:20.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=e2e-tests-statefulset-8m85l ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 10:24:20.617: INFO: stderr: "I0512 10:24:20.295497 283 log.go:172] (0xc000138790) (0xc000629720) Create stream\nI0512 10:24:20.295563 283 log.go:172] (0xc000138790) (0xc000629720) Stream added, broadcasting: 1\nI0512 10:24:20.298037 283 log.go:172] (0xc000138790) Reply frame received for 1\nI0512 10:24:20.298081 283 log.go:172] (0xc000138790) (0xc0003e83c0) Create stream\nI0512 10:24:20.298097 283 log.go:172] (0xc000138790) (0xc0003e83c0) Stream added, broadcasting: 3\nI0512 10:24:20.298981 283 log.go:172] (0xc000138790) Reply frame received for 3\nI0512 10:24:20.299028 283 log.go:172] (0xc000138790) (0xc0003e8460) Create stream\nI0512 10:24:20.299043 283 log.go:172] (0xc000138790) (0xc0003e8460) Stream added, broadcasting: 5\nI0512 10:24:20.299933 283 log.go:172] (0xc000138790) Reply frame received for 5\nI0512 10:24:20.607855 283 log.go:172] (0xc000138790) Data frame received for 3\nI0512 10:24:20.607896 283 log.go:172] (0xc0003e83c0) (3) Data frame handling\nI0512 10:24:20.607924 283 log.go:172] (0xc0003e83c0) (3) Data frame sent\nI0512 10:24:20.607941 283 log.go:172] (0xc000138790) Data frame received for 3\nI0512 10:24:20.607963 283 log.go:172] (0xc0003e83c0) (3) Data frame handling\nI0512 10:24:20.608173 283 log.go:172] (0xc000138790) Data frame received for 5\nI0512 10:24:20.608196 283 log.go:172] (0xc0003e8460) (5) Data frame handling\nI0512 10:24:20.610415 283 log.go:172] (0xc000138790) Data frame received for 1\nI0512 10:24:20.610519 283 log.go:172] (0xc000629720) (1) Data frame handling\nI0512 10:24:20.610587 283 log.go:172] (0xc000629720) (1) Data frame sent\nI0512 10:24:20.610614 283 log.go:172] (0xc000138790) (0xc000629720) Stream removed, broadcasting: 1\nI0512 10:24:20.610637 283 log.go:172] (0xc000138790) Go away received\nI0512 10:24:20.610992 283 log.go:172] (0xc000138790) (0xc000629720) Stream removed, broadcasting: 
1\nI0512 10:24:20.611011 283 log.go:172] (0xc000138790) (0xc0003e83c0) Stream removed, broadcasting: 3\nI0512 10:24:20.611019 283 log.go:172] (0xc000138790) (0xc0003e8460) Stream removed, broadcasting: 5\n" May 12 10:24:20.617: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 10:24:20.617: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 10:24:30.787: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 12 10:24:41.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8m85l ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:24:41.829: INFO: stderr: "I0512 10:24:41.588275 306 log.go:172] (0xc000138630) (0xc00085ed20) Create stream\nI0512 10:24:41.588337 306 log.go:172] (0xc000138630) (0xc00085ed20) Stream added, broadcasting: 1\nI0512 10:24:41.592068 306 log.go:172] (0xc000138630) Reply frame received for 1\nI0512 10:24:41.592135 306 log.go:172] (0xc000138630) (0xc00042a640) Create stream\nI0512 10:24:41.592162 306 log.go:172] (0xc000138630) (0xc00042a640) Stream added, broadcasting: 3\nI0512 10:24:41.593579 306 log.go:172] (0xc000138630) Reply frame received for 3\nI0512 10:24:41.593614 306 log.go:172] (0xc000138630) (0xc0004555e0) Create stream\nI0512 10:24:41.593623 306 log.go:172] (0xc000138630) (0xc0004555e0) Stream added, broadcasting: 5\nI0512 10:24:41.594497 306 log.go:172] (0xc000138630) Reply frame received for 5\nI0512 10:24:41.819536 306 log.go:172] (0xc000138630) Data frame received for 3\nI0512 10:24:41.819607 306 log.go:172] (0xc00042a640) (3) Data frame handling\nI0512 10:24:41.819652 306 log.go:172] (0xc00042a640) (3) Data frame sent\nI0512 10:24:41.819806 306 log.go:172] (0xc000138630) Data frame received for 5\nI0512 10:24:41.819852 306 log.go:172] (0xc0004555e0) (5) Data frame handling\nI0512 
10:24:41.819937 306 log.go:172] (0xc000138630) Data frame received for 3\nI0512 10:24:41.819964 306 log.go:172] (0xc00042a640) (3) Data frame handling\nI0512 10:24:41.823578 306 log.go:172] (0xc000138630) Data frame received for 1\nI0512 10:24:41.823612 306 log.go:172] (0xc00085ed20) (1) Data frame handling\nI0512 10:24:41.823621 306 log.go:172] (0xc00085ed20) (1) Data frame sent\nI0512 10:24:41.823629 306 log.go:172] (0xc000138630) (0xc00085ed20) Stream removed, broadcasting: 1\nI0512 10:24:41.823641 306 log.go:172] (0xc000138630) Go away received\nI0512 10:24:41.823903 306 log.go:172] (0xc000138630) (0xc00085ed20) Stream removed, broadcasting: 1\nI0512 10:24:41.823940 306 log.go:172] (0xc000138630) (0xc00042a640) Stream removed, broadcasting: 3\nI0512 10:24:41.823959 306 log.go:172] (0xc000138630) (0xc0004555e0) Stream removed, broadcasting: 5\n" May 12 10:24:41.829: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 10:24:41.829: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 10:24:52.586: INFO: Waiting for StatefulSet e2e-tests-statefulset-8m85l/ss2 to complete update May 12 10:24:52.586: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 10:24:52.586: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 10:25:03.416: INFO: Waiting for StatefulSet e2e-tests-statefulset-8m85l/ss2 to complete update May 12 10:25:03.416: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 10:25:03.416: INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 10:25:13.078: INFO: Waiting for StatefulSet e2e-tests-statefulset-8m85l/ss2 to complete update May 12 10:25:13.078: 
INFO: Waiting for Pod e2e-tests-statefulset-8m85l/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 10:25:22.591: INFO: Waiting for StatefulSet e2e-tests-statefulset-8m85l/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 12 10:25:32.592: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8m85l May 12 10:25:32.594: INFO: Scaling statefulset ss2 to 0 May 12 10:26:02.825: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:26:02.827: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:26:03.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-8m85l" for this suite. May 12 10:26:18.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:26:18.077: INFO: namespace: e2e-tests-statefulset-8m85l, resource: bindings, ignored listing per whitelist May 12 10:26:18.097: INFO: namespace e2e-tests-statefulset-8m85l deletion completed in 15.025383908s • [SLOW TEST:231.243 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:26:18.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 10:26:18.165: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0420c1d0-943b-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-4mv7g" to be "success or failure" May 12 10:26:18.175: INFO: Pod "downwardapi-volume-0420c1d0-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.595922ms May 12 10:26:20.270: INFO: Pod "downwardapi-volume-0420c1d0-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104788686s May 12 10:26:22.274: INFO: Pod "downwardapi-volume-0420c1d0-943b-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.108424327s May 12 10:26:24.277: INFO: Pod "downwardapi-volume-0420c1d0-943b-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.111815281s STEP: Saw pod success May 12 10:26:24.277: INFO: Pod "downwardapi-volume-0420c1d0-943b-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:26:24.279: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0420c1d0-943b-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 10:26:24.761: INFO: Waiting for pod downwardapi-volume-0420c1d0-943b-11ea-92b2-0242ac11001c to disappear May 12 10:26:24.809: INFO: Pod downwardapi-volume-0420c1d0-943b-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:26:24.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4mv7g" for this suite. May 12 10:26:31.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:26:31.270: INFO: namespace: e2e-tests-downward-api-4mv7g, resource: bindings, ignored listing per whitelist May 12 10:26:31.310: INFO: namespace e2e-tests-downward-api-4mv7g deletion completed in 6.497043192s • [SLOW TEST:13.213 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client May 12 10:26:31.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0c2118f5-943b-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume secrets May 12 10:26:31.995: INFO: Waiting up to 5m0s for pod "pod-secrets-0c5c9397-943b-11ea-92b2-0242ac11001c" in namespace "e2e-tests-secrets-mq8tl" to be "success or failure" May 12 10:26:32.179: INFO: Pod "pod-secrets-0c5c9397-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 184.121182ms May 12 10:26:34.215: INFO: Pod "pod-secrets-0c5c9397-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220013677s May 12 10:26:36.217: INFO: Pod "pod-secrets-0c5c9397-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222721136s May 12 10:26:38.365: INFO: Pod "pod-secrets-0c5c9397-943b-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.370029992s STEP: Saw pod success May 12 10:26:38.365: INFO: Pod "pod-secrets-0c5c9397-943b-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:26:38.752: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-0c5c9397-943b-11ea-92b2-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 10:26:39.266: INFO: Waiting for pod pod-secrets-0c5c9397-943b-11ea-92b2-0242ac11001c to disappear May 12 10:26:39.388: INFO: Pod pod-secrets-0c5c9397-943b-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:26:39.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mq8tl" for this suite. May 12 10:26:49.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:26:49.569: INFO: namespace: e2e-tests-secrets-mq8tl, resource: bindings, ignored listing per whitelist May 12 10:26:49.577: INFO: namespace e2e-tests-secrets-mq8tl deletion completed in 10.156111771s STEP: Destroying namespace "e2e-tests-secret-namespace-b8cd2" for this suite. 
May 12 10:26:55.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:26:55.971: INFO: namespace: e2e-tests-secret-namespace-b8cd2, resource: bindings, ignored listing per whitelist May 12 10:26:56.007: INFO: namespace e2e-tests-secret-namespace-b8cd2 deletion completed in 6.429145662s • [SLOW TEST:24.696 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:26:56.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 12 10:26:56.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7wcsv' May 12 10:27:02.226: INFO: stderr: "" May 12 10:27:02.226: INFO: stdout: "pod/pause created\n" May 12 10:27:02.226: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 12 10:27:02.226: 
INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-7wcsv" to be "running and ready" May 12 10:27:02.395: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 169.250032ms May 12 10:27:04.443: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217193191s May 12 10:27:06.447: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221070845s May 12 10:27:08.451: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.225016805s May 12 10:27:08.451: INFO: Pod "pause" satisfied condition "running and ready" May 12 10:27:08.451: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 12 10:27:08.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-7wcsv' May 12 10:27:08.645: INFO: stderr: "" May 12 10:27:08.645: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 12 10:27:08.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-7wcsv' May 12 10:27:08.887: INFO: stderr: "" May 12 10:27:08.887: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod May 12 10:27:08.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-7wcsv' May 12 10:27:09.336: INFO: stderr: "" May 12 10:27:09.336: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 12 
10:27:09.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-7wcsv' May 12 10:27:09.527: INFO: stderr: "" May 12 10:27:09.527: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 12 10:27:09.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7wcsv' May 12 10:27:09.744: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:27:09.744: INFO: stdout: "pod \"pause\" force deleted\n" May 12 10:27:09.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-7wcsv' May 12 10:27:09.863: INFO: stderr: "No resources found.\n" May 12 10:27:09.863: INFO: stdout: "" May 12 10:27:09.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-7wcsv -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 10:27:09.966: INFO: stderr: "" May 12 10:27:09.966: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:27:09.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7wcsv" for this suite. 
May 12 10:27:16.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:27:16.012: INFO: namespace: e2e-tests-kubectl-7wcsv, resource: bindings, ignored listing per whitelist May 12 10:27:16.074: INFO: namespace e2e-tests-kubectl-7wcsv deletion completed in 6.105557007s • [SLOW TEST:20.067 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:27:16.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-w424 STEP: Creating a pod to test atomic-volume-subpath May 12 10:27:17.306: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-w424" in namespace "e2e-tests-subpath-pwzxp" to be "success or failure" May 12 10:27:17.658: INFO: Pod 
"pod-subpath-test-projected-w424": Phase="Pending", Reason="", readiness=false. Elapsed: 352.396141ms May 12 10:27:19.663: INFO: Pod "pod-subpath-test-projected-w424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35686756s May 12 10:27:21.769: INFO: Pod "pod-subpath-test-projected-w424": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463077289s May 12 10:27:23.772: INFO: Pod "pod-subpath-test-projected-w424": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466118287s May 12 10:27:25.776: INFO: Pod "pod-subpath-test-projected-w424": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470364568s May 12 10:27:27.781: INFO: Pod "pod-subpath-test-projected-w424": Phase="Pending", Reason="", readiness=false. Elapsed: 10.474805522s May 12 10:27:29.784: INFO: Pod "pod-subpath-test-projected-w424": Phase="Running", Reason="", readiness=false. Elapsed: 12.477564534s May 12 10:27:31.786: INFO: Pod "pod-subpath-test-projected-w424": Phase="Running", Reason="", readiness=false. Elapsed: 14.480341951s May 12 10:27:33.790: INFO: Pod "pod-subpath-test-projected-w424": Phase="Running", Reason="", readiness=false. Elapsed: 16.48405331s May 12 10:27:35.904: INFO: Pod "pod-subpath-test-projected-w424": Phase="Running", Reason="", readiness=false. Elapsed: 18.597976962s May 12 10:27:37.907: INFO: Pod "pod-subpath-test-projected-w424": Phase="Running", Reason="", readiness=false. Elapsed: 20.601189373s May 12 10:27:40.174: INFO: Pod "pod-subpath-test-projected-w424": Phase="Running", Reason="", readiness=false. Elapsed: 22.868017391s May 12 10:27:42.347: INFO: Pod "pod-subpath-test-projected-w424": Phase="Running", Reason="", readiness=false. Elapsed: 25.040864826s May 12 10:27:44.350: INFO: Pod "pod-subpath-test-projected-w424": Phase="Running", Reason="", readiness=false. Elapsed: 27.04426387s May 12 10:27:46.354: INFO: Pod "pod-subpath-test-projected-w424": Phase="Running", Reason="", readiness=false. 
Elapsed: 29.048222321s May 12 10:27:48.358: INFO: Pod "pod-subpath-test-projected-w424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.051465272s STEP: Saw pod success May 12 10:27:48.358: INFO: Pod "pod-subpath-test-projected-w424" satisfied condition "success or failure" May 12 10:27:48.360: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-w424 container test-container-subpath-projected-w424: STEP: delete the pod May 12 10:27:48.585: INFO: Waiting for pod pod-subpath-test-projected-w424 to disappear May 12 10:27:48.772: INFO: Pod pod-subpath-test-projected-w424 no longer exists STEP: Deleting pod pod-subpath-test-projected-w424 May 12 10:27:48.772: INFO: Deleting pod "pod-subpath-test-projected-w424" in namespace "e2e-tests-subpath-pwzxp" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:27:48.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-pwzxp" for this suite. 
May 12 10:27:56.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:27:56.825: INFO: namespace: e2e-tests-subpath-pwzxp, resource: bindings, ignored listing per whitelist May 12 10:27:56.874: INFO: namespace e2e-tests-subpath-pwzxp deletion completed in 8.096110266s • [SLOW TEST:40.800 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:27:56.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:28:12.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-replication-controller-lbj8l" for this suite. May 12 10:28:39.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:28:39.236: INFO: namespace: e2e-tests-replication-controller-lbj8l, resource: bindings, ignored listing per whitelist May 12 10:28:39.236: INFO: namespace e2e-tests-replication-controller-lbj8l deletion completed in 26.661456619s • [SLOW TEST:42.362 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:28:39.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 10:28:39.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 
--image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hd5sp' May 12 10:28:39.679: INFO: stderr: "" May 12 10:28:39.679: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 12 10:28:39.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hd5sp' May 12 10:28:51.741: INFO: stderr: "" May 12 10:28:51.741: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:28:51.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hd5sp" for this suite. May 12 10:29:00.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:29:01.300: INFO: namespace: e2e-tests-kubectl-hd5sp, resource: bindings, ignored listing per whitelist May 12 10:29:01.315: INFO: namespace e2e-tests-kubectl-hd5sp deletion completed in 9.567465083s • [SLOW TEST:22.079 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:29:01.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 12 10:29:01.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:03.086: INFO: stderr: "" May 12 10:29:03.086: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 12 10:29:03.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:03.423: INFO: stderr: "" May 12 10:29:03.423: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 May 12 10:29:08.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:08.556: INFO: stderr: "" May 12 10:29:08.556: INFO: stdout: "update-demo-nautilus-jr4c5 update-demo-nautilus-lnt55 " May 12 10:29:08.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jr4c5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:08.723: INFO: stderr: "" May 12 10:29:08.723: INFO: stdout: "" May 12 10:29:08.723: INFO: update-demo-nautilus-jr4c5 is created but not running May 12 10:29:13.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:13.823: INFO: stderr: "" May 12 10:29:13.823: INFO: stdout: "update-demo-nautilus-jr4c5 update-demo-nautilus-lnt55 " May 12 10:29:13.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jr4c5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:13.918: INFO: stderr: "" May 12 10:29:13.918: INFO: stdout: "true" May 12 10:29:13.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jr4c5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:14.089: INFO: stderr: "" May 12 10:29:14.089: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 10:29:14.089: INFO: validating pod update-demo-nautilus-jr4c5 May 12 10:29:14.093: INFO: got data: { "image": "nautilus.jpg" } May 12 10:29:14.093: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 10:29:14.093: INFO: update-demo-nautilus-jr4c5 is verified up and running May 12 10:29:14.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lnt55 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:14.191: INFO: stderr: "" May 12 10:29:14.191: INFO: stdout: "true" May 12 10:29:14.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lnt55 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:14.288: INFO: stderr: "" May 12 10:29:14.288: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 10:29:14.288: INFO: validating pod update-demo-nautilus-lnt55 May 12 10:29:14.291: INFO: got data: { "image": "nautilus.jpg" } May 12 10:29:14.291: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 10:29:14.291: INFO: update-demo-nautilus-lnt55 is verified up and running STEP: using delete to clean up resources May 12 10:29:14.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:14.409: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:29:14.409: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 10:29:14.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-c7spx' May 12 10:29:14.931: INFO: stderr: "No resources found.\n" May 12 10:29:14.931: INFO: stdout: "" May 12 10:29:14.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-c7spx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 10:29:15.281: INFO: stderr: "" May 12 10:29:15.281: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:29:15.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-c7spx" for this 
suite. May 12 10:29:23.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:29:25.316: INFO: namespace: e2e-tests-kubectl-c7spx, resource: bindings, ignored listing per whitelist May 12 10:29:25.357: INFO: namespace e2e-tests-kubectl-c7spx deletion completed in 9.900044665s • [SLOW TEST:24.042 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:29:25.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wc2q4 in namespace e2e-tests-proxy-6cnwb I0512 10:29:26.402299 6 runners.go:184] Created replication controller with name: proxy-service-wc2q4, namespace: e2e-tests-proxy-6cnwb, replica count: 1 I0512 10:29:27.452653 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 
10:29:28.452832 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:29:29.453026 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:29:30.453285 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:29:31.453524 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:29:32.453787 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:29:33.454053 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:29:34.454271 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:29:35.454442 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:29:36.454613 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:29:37.454760 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:29:38.454965 6 runners.go:184] proxy-service-wc2q4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:29:39.455214 6 runners.go:184] 
proxy-service-wc2q4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 10:29:39.458: INFO: setup took 13.463445605s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 12 10:29:39.465: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6cnwb/pods/proxy-service-wc2q4-zshz4:162/proxy/: bar (200; 6.231624ms) May 12 10:29:39.465: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6cnwb/pods/proxy-service-wc2q4-zshz4:160/proxy/: foo (200; 6.690889ms) May 12 10:29:39.465: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6cnwb/services/http:proxy-service-wc2q4:portname2/proxy/: bar (200; 6.823804ms) May 12 10:29:39.466: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6cnwb/pods/http:proxy-service-wc2q4-zshz4:160/proxy/: foo (200; 7.282193ms) May 12 10:29:39.468: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6cnwb/pods/http:proxy-service-wc2q4-zshz4:162/proxy/: bar (200; 9.577641ms) May 12 10:29:39.469: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6cnwb/pods/proxy-service-wc2q4-zshz4/proxy/: [output truncated] ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 12 10:30:08.150: INFO: Successfully updated pod "labelsupdate890ddd39-943b-11ea-92b2-0242ac11001c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:30:10.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4222p" for this
suite. May 12 10:30:33.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:30:33.259: INFO: namespace: e2e-tests-downward-api-4222p, resource: bindings, ignored listing per whitelist May 12 10:30:33.263: INFO: namespace e2e-tests-downward-api-4222p deletion completed in 22.498453787s • [SLOW TEST:32.378 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:30:33.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-9c4fd561-943b-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume secrets May 12 10:30:33.487: INFO: Waiting up to 5m0s for pod "pod-secrets-9c51ad51-943b-11ea-92b2-0242ac11001c" in namespace "e2e-tests-secrets-llwlj" to be "success or failure" May 12 10:30:33.511: INFO: Pod "pod-secrets-9c51ad51-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.039449ms May 12 10:30:36.212: INFO: Pod "pod-secrets-9c51ad51-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.724588261s May 12 10:30:38.260: INFO: Pod "pod-secrets-9c51ad51-943b-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.772410898s May 12 10:30:40.264: INFO: Pod "pod-secrets-9c51ad51-943b-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.777223857s STEP: Saw pod success May 12 10:30:40.265: INFO: Pod "pod-secrets-9c51ad51-943b-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:30:40.267: INFO: Trying to get logs from node hunter-worker pod pod-secrets-9c51ad51-943b-11ea-92b2-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 10:30:40.333: INFO: Waiting for pod pod-secrets-9c51ad51-943b-11ea-92b2-0242ac11001c to disappear May 12 10:30:40.342: INFO: Pod pod-secrets-9c51ad51-943b-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:30:40.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-llwlj" for this suite. 
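The test above mounts one Secret into a pod through two separate volumes. As background to what the manifest for such a test looks like: Secret values are base64-encoded in the object's `data` field, and a pod may reference the same Secret from multiple volume entries. A minimal sketch (all names here are illustrative, not the generated names from this run):

```python
import base64

def make_secret_manifest(name, namespace, string_data):
    # Build a minimal Secret manifest; values under `data` must be base64-encoded.
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": namespace},
        "data": {k: base64.b64encode(v.encode()).decode()
                 for k, v in string_data.items()},
    }

secret = make_secret_manifest("demo-secret", "default", {"data-1": "value-1"})

# A pod spec can mount the same Secret at two paths via two volume entries:
volumes = [
    {"name": "secret-volume-1", "secret": {"secretName": "demo-secret"}},
    {"name": "secret-volume-2", "secret": {"secretName": "demo-secret"}},
]

# The kubelet decodes the stored value back to the original bytes on mount:
decoded = base64.b64decode(secret["data"]["data-1"]).decode()
```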
May 12 10:30:46.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:30:46.618: INFO: namespace: e2e-tests-secrets-llwlj, resource: bindings, ignored listing per whitelist May 12 10:30:46.662: INFO: namespace e2e-tests-secrets-llwlj deletion completed in 6.3159767s • [SLOW TEST:13.398 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:30:46.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 12 10:30:47.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kv7hf' May 12 10:30:47.710: INFO: stderr: "" May 12 10:30:47.710: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 12 10:30:48.835: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:48.835: INFO: Found 0 / 1 May 12 10:30:49.751: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:49.751: INFO: Found 0 / 1 May 12 10:30:50.731: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:50.731: INFO: Found 0 / 1 May 12 10:30:52.038: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:52.038: INFO: Found 0 / 1 May 12 10:30:52.743: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:52.743: INFO: Found 0 / 1 May 12 10:30:53.714: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:53.715: INFO: Found 0 / 1 May 12 10:30:54.751: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:54.752: INFO: Found 0 / 1 May 12 10:30:56.578: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:56.578: INFO: Found 1 / 1 May 12 10:30:56.578: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 12 10:30:56.599: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:56.599: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 10:30:56.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jvqp4 --namespace=e2e-tests-kubectl-kv7hf -p {"metadata":{"annotations":{"x":"y"}}}' May 12 10:30:56.960: INFO: stderr: "" May 12 10:30:56.960: INFO: stdout: "pod/redis-master-jvqp4 patched\n" STEP: checking annotations May 12 10:30:56.980: INFO: Selector matched 1 pods for map[app:redis] May 12 10:30:56.980: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:30:56.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kv7hf" for this suite. 
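The `kubectl patch` invocation above applies `{"metadata":{"annotations":{"x":"y"}}}` as a merge patch: objects are merged key-by-key, so the new annotation is added while the existing labels and name are left untouched (for plain map-valued fields like annotations, kubectl's default strategic merge patch behaves like an RFC 7386 JSON Merge Patch, where a `null` value deletes a key). A cluster-independent sketch of those semantics; the `pod` dict below is a hypothetical stand-in, not the test's actual object:

```python
def json_merge_patch(target, patch):
    # RFC 7386 semantics: dicts merge recursively, null deletes a key,
    # any non-dict patch value replaces the target value wholesale.
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

pod = {"metadata": {"name": "redis-master-abc12", "labels": {"app": "redis"}}}
patched = json_merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

Note that the merge leaves `pod` itself unmodified and only the `annotations` key is introduced; the server performs the equivalent merge before persisting the object.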
May 12 10:31:25.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:31:25.506: INFO: namespace: e2e-tests-kubectl-kv7hf, resource: bindings, ignored listing per whitelist May 12 10:31:25.506: INFO: namespace e2e-tests-kubectl-kv7hf deletion completed in 28.521829829s • [SLOW TEST:38.845 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:31:25.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-6b4zb STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-6b4zb STEP: Deleting pre-stop pod May 12 10:31:43.630: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:31:43.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-6b4zb" for this suite. May 12 10:32:28.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:32:28.103: INFO: namespace: e2e-tests-prestop-6b4zb, resource: bindings, ignored listing per whitelist May 12 10:32:28.156: INFO: namespace e2e-tests-prestop-6b4zb deletion completed in 44.50651153s • [SLOW TEST:62.650 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:32:28.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-e0f92aea-943b-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 10:32:28.816: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-z8pdc" to be "success or failure" May 12 10:32:28.848: INFO: Pod "pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.866684ms May 12 10:32:30.852: INFO: Pod "pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035617354s May 12 10:32:33.308: INFO: Pod "pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492191839s May 12 10:32:35.313: INFO: Pod "pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496864917s May 12 10:32:37.316: INFO: Pod "pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.499802009s May 12 10:32:39.319: INFO: Pod "pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.502532917s STEP: Saw pod success May 12 10:32:39.319: INFO: Pod "pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:32:39.320: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 10:32:39.748: INFO: Waiting for pod pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c to disappear May 12 10:32:40.543: INFO: Pod pod-projected-configmaps-e0fab656-943b-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:32:40.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z8pdc" for this suite. May 12 10:32:50.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:32:50.808: INFO: namespace: e2e-tests-projected-z8pdc, resource: bindings, ignored listing per whitelist May 12 10:32:51.098: INFO: namespace e2e-tests-projected-z8pdc deletion completed in 10.549097752s • [SLOW TEST:22.941 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:32:51.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-cdxsk [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-cdxsk STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-cdxsk STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-cdxsk STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-cdxsk May 12 10:33:00.650: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-cdxsk, name: ss-0, uid: f1ca38c8-943b-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. May 12 10:33:01.301: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-cdxsk, name: ss-0, uid: f1ca38c8-943b-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 12 10:33:01.308: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-cdxsk, name: ss-0, uid: f1ca38c8-943b-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
May 12 10:33:01.489: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-cdxsk STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-cdxsk STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-cdxsk and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 12 10:33:12.545: INFO: Deleting all statefulset in ns e2e-tests-statefulset-cdxsk May 12 10:33:12.548: INFO: Scaling statefulset ss to 0 May 12 10:33:22.802: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:33:22.804: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:33:23.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-cdxsk" for this suite. 
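The "Waiting up to …" / "Elapsed: …" lines throughout this log come from the e2e framework polling an object's state until a condition holds or a timeout expires (e.g. waiting for a pod to reach "success or failure", or for the StatefulSet controller to recreate ss-0 above). A minimal, cluster-independent sketch of that poll loop; the function and the simulated pod phases are illustrative, not the framework's actual API:

```python
import time

def wait_for_condition(check, timeout_s, interval_s=2.0):
    # Poll check() until it returns True; raise if timeout_s elapses first.
    # Returns the elapsed time, mirroring the log's "Elapsed:" reporting.
    start = time.monotonic()
    while True:
        if check():
            return time.monotonic() - start
        if time.monotonic() - start >= timeout_s:
            raise TimeoutError(f"condition not met within {timeout_s}s")
        time.sleep(interval_s)

# Simulate a pod progressing through phases on successive status reads:
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
current = {"phase": "Pending"}

def pod_done():
    current["phase"] = next(phases, current["phase"])
    return current["phase"] in ("Succeeded", "Failed")

elapsed = wait_for_condition(pod_done, timeout_s=30, interval_s=0.01)
```

Each poll iteration corresponds to one `Phase="…" … Elapsed: …` line in the log above; the condition here is terminal pod phase, but the same loop shape is used for node readiness and namespace deletion.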
May 12 10:33:35.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:33:35.179: INFO: namespace: e2e-tests-statefulset-cdxsk, resource: bindings, ignored listing per whitelist May 12 10:33:35.180: INFO: namespace e2e-tests-statefulset-cdxsk deletion completed in 12.163155146s • [SLOW TEST:44.081 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:33:35.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-099a2179-943c-11ea-92b2-0242ac11001c STEP: Creating secret with name s-test-opt-upd-099a21f1-943c-11ea-92b2-0242ac11001c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-099a2179-943c-11ea-92b2-0242ac11001c STEP: Updating secret s-test-opt-upd-099a21f1-943c-11ea-92b2-0242ac11001c STEP: Creating secret with name 
s-test-opt-create-099a2218-943c-11ea-92b2-0242ac11001c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:34:01.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-84wg9" for this suite. May 12 10:34:31.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:34:31.160: INFO: namespace: e2e-tests-secrets-84wg9, resource: bindings, ignored listing per whitelist May 12 10:34:31.220: INFO: namespace e2e-tests-secrets-84wg9 deletion completed in 29.232070074s • [SLOW TEST:56.040 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:34:31.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 10:34:33.147: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 12 
10:34:38.552: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 10:34:47.276: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 12 10:34:49.419: INFO: Creating deployment "test-rollover-deployment" May 12 10:34:49.934: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 12 10:34:52.199: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 12 10:34:53.084: INFO: Ensure that both replica sets have 1 created replica May 12 10:34:53.091: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 12 10:34:53.097: INFO: Updating deployment test-rollover-deployment May 12 10:34:53.097: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 12 10:34:56.044: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 12 10:34:57.224: INFO: Make sure deployment "test-rollover-deployment" is complete May 12 10:34:58.145: INFO: all replica sets need to contain the pod-template-hash label May 12 10:34:58.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:0, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876496, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:00.471: INFO: all replica sets need to contain the pod-template-hash label May 12 10:35:00.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876498, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:02.153: INFO: all replica sets need to contain the pod-template-hash label May 12 10:35:02.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876498, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:04.154: INFO: all replica sets need to contain the pod-template-hash label May 12 10:35:04.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876498, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:06.152: INFO: all replica sets need to contain the pod-template-hash label May 12 10:35:06.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876505, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:08.152: INFO: all replica sets need to contain the pod-template-hash label May 12 10:35:08.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876505, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:10.153: INFO: all replica sets need to contain the pod-template-hash label May 12 10:35:10.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876505, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:12.200: INFO: all replica sets need to contain the pod-template-hash label May 12 10:35:12.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876505, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:14.150: INFO: all replica sets need to contain the pod-template-hash label May 12 10:35:14.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876505, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:16.234: INFO: May 12 10:35:16.234: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876516, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876490, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:35:18.172: INFO: May 12 10:35:18.172: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 12 10:35:18.179: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-rdwv5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdwv5/deployments/test-rollover-deployment,UID:34df1405-943c-11ea-99e8-0242ac110002,ResourceVersion:10146948,Generation:2,CreationTimestamp:2020-05-12 10:34:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-12 10:34:50 +0000 UTC 2020-05-12 10:34:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 10:35:16 +0000 UTC 2020-05-12 10:34:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 12 10:35:18.181: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-rdwv5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdwv5/replicasets/test-rollover-deployment-5b8479fdb6,UID:37106d97-943c-11ea-99e8-0242ac110002,ResourceVersion:10146939,Generation:2,CreationTimestamp:2020-05-12 10:34:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 34df1405-943c-11ea-99e8-0242ac110002 0xc001933dc7 0xc001933dc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 10:35:18.181: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 12 10:35:18.181: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-rdwv5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdwv5/replicasets/test-rollover-controller,UID:2ad25372-943c-11ea-99e8-0242ac110002,ResourceVersion:10146947,Generation:2,CreationTimestamp:2020-05-12 10:34:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 34df1405-943c-11ea-99e8-0242ac110002 0xc001933b9f 0xc001933bb0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 10:35:18.182: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-rdwv5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdwv5/replicasets/test-rollover-deployment-58494b7559,UID:355df027-943c-11ea-99e8-0242ac110002,ResourceVersion:10146893,Generation:2,CreationTimestamp:2020-05-12 10:34:50 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 34df1405-943c-11ea-99e8-0242ac110002 0xc001933ce7 0xc001933ce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 10:35:18.184: INFO: Pod "test-rollover-deployment-5b8479fdb6-sjhc5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-sjhc5,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-rdwv5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdwv5/pods/test-rollover-deployment-5b8479fdb6-sjhc5,UID:38625216-943c-11ea-99e8-0242ac110002,ResourceVersion:10146916,Generation:0,CreationTimestamp:2020-05-12 10:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 37106d97-943c-11ea-99e8-0242ac110002 0xc0018aefb7 0xc0018aefb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rrbsk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrbsk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-rrbsk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018af030} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018af410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:34:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:35:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:35:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:34:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.201,StartTime:2020-05-12 10:34:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-12 10:35:05 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://7eca560e20f7c2714cc608bff401f6a01c8e9fc6cf91f5b048dc680eadf3ba16}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:35:18.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-rdwv5" for this suite. May 12 10:35:34.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:35:34.663: INFO: namespace: e2e-tests-deployment-rdwv5, resource: bindings, ignored listing per whitelist May 12 10:35:34.674: INFO: namespace e2e-tests-deployment-rdwv5 deletion completed in 16.486706427s • [SLOW TEST:63.454 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:35:34.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-f72g STEP: Creating a pod to test atomic-volume-subpath May 12 10:35:35.751: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f72g" in namespace "e2e-tests-subpath-lh62r" to be "success or failure" May 12 10:35:35.948: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Pending", Reason="", readiness=false. Elapsed: 196.895688ms May 12 10:35:37.983: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231728247s May 12 10:35:40.026: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.275642082s May 12 10:35:42.138: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387107735s May 12 10:35:44.434: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.683109845s May 12 10:35:46.437: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.686145387s May 12 10:35:48.552: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.801037115s May 12 10:35:50.554: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Pending", Reason="", readiness=false. Elapsed: 14.803450121s May 12 10:35:52.665: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Running", Reason="", readiness=true. Elapsed: 16.914244403s May 12 10:35:54.670: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Running", Reason="", readiness=false. Elapsed: 18.918774296s May 12 10:35:56.674: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Running", Reason="", readiness=false. Elapsed: 20.922908694s May 12 10:35:59.260: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Running", Reason="", readiness=false. 
Elapsed: 23.50916261s May 12 10:36:01.263: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Running", Reason="", readiness=false. Elapsed: 25.511813481s May 12 10:36:03.534: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Running", Reason="", readiness=false. Elapsed: 27.783300423s May 12 10:36:05.589: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Running", Reason="", readiness=false. Elapsed: 29.838007959s May 12 10:36:07.593: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Running", Reason="", readiness=false. Elapsed: 31.842307677s May 12 10:36:09.597: INFO: Pod "pod-subpath-test-configmap-f72g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.84569118s STEP: Saw pod success May 12 10:36:09.597: INFO: Pod "pod-subpath-test-configmap-f72g" satisfied condition "success or failure" May 12 10:36:09.599: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-f72g container test-container-subpath-configmap-f72g: STEP: delete the pod May 12 10:36:10.222: INFO: Waiting for pod pod-subpath-test-configmap-f72g to disappear May 12 10:36:10.225: INFO: Pod pod-subpath-test-configmap-f72g no longer exists STEP: Deleting pod pod-subpath-test-configmap-f72g May 12 10:36:10.225: INFO: Deleting pod "pod-subpath-test-configmap-f72g" in namespace "e2e-tests-subpath-lh62r" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:36:10.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lh62r" for this suite. 
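For reference, the kind of manifest the subpath test above exercises can be sketched roughly as follows. This is an illustrative config fragment only, not the manifest the e2e framework generates; the pod name, ConfigMap name, key, and mount path are all assumptions:

```yaml
# Hypothetical sketch: mount one ConfigMap key via subPath over an existing file.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-example        # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: config-volume
    configMap:
      name: example-configmap      # hypothetical ConfigMap
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/etc/hostname"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/hostname     # an existing file in the image, overlaid
      subPath: data-1              # hypothetical key in the ConfigMap
```

The point the test verifies is that a subPath mount can shadow a file that already exists in the container image, atomically.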
May 12 10:36:18.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:36:18.694: INFO: namespace: e2e-tests-subpath-lh62r, resource: bindings, ignored listing per whitelist May 12 10:36:18.696: INFO: namespace e2e-tests-subpath-lh62r deletion completed in 8.202702781s • [SLOW TEST:44.022 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:36:18.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 10:36:18.905: INFO: Creating ReplicaSet my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c May 12 10:36:18.918: INFO: Pod name my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c: Found 0 pods out of 1 May 12 10:36:23.923: INFO: Pod name my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c: Found 1 pods out of 1 May 12 10:36:23.923: INFO: Ensuring a pod for ReplicaSet 
"my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c" is running May 12 10:36:25.934: INFO: Pod "my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c-56w6b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:36:18 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:36:18 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:36:18 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:36:18 +0000 UTC Reason: Message:}]) May 12 10:36:25.935: INFO: Trying to dial the pod May 12 10:36:30.947: INFO: Controller my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c: Got expected result from replica 1 [my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c-56w6b]: "my-hostname-basic-6a3563f9-943c-11ea-92b2-0242ac11001c-56w6b", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:36:30.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-n4hlg" for this suite. 
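The ReplicaSet created by this test serves each pod's hostname over HTTP, which is why the expected response equals the pod name. A minimal sketch of such a ReplicaSet (names and image tag are assumptions; the test generates a unique suffix):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # the test appends a unique UUID suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed tag
        ports:
        - containerPort: 9376      # serve-hostname's default port
```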
May 12 10:36:43.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:36:43.093: INFO: namespace: e2e-tests-replicaset-n4hlg, resource: bindings, ignored listing per whitelist May 12 10:36:43.097: INFO: namespace e2e-tests-replicaset-n4hlg deletion completed in 12.146362042s • [SLOW TEST:24.400 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:36:43.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 12 10:36:43.627: INFO: Waiting up to 5m0s for pod "pod-78e58c14-943c-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-zmljb" to be "success or failure" May 12 10:36:44.177: INFO: Pod "pod-78e58c14-943c-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 549.864396ms May 12 10:36:46.180: INFO: Pod "pod-78e58c14-943c-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.552893457s May 12 10:36:48.216: INFO: Pod "pod-78e58c14-943c-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.589788438s May 12 10:36:50.219: INFO: Pod "pod-78e58c14-943c-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.592384496s STEP: Saw pod success May 12 10:36:50.219: INFO: Pod "pod-78e58c14-943c-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:36:50.221: INFO: Trying to get logs from node hunter-worker2 pod pod-78e58c14-943c-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 10:36:50.410: INFO: Waiting for pod pod-78e58c14-943c-11ea-92b2-0242ac11001c to disappear May 12 10:36:50.642: INFO: Pod pod-78e58c14-943c-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:36:50.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zmljb" for this suite. 
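A tmpfs-backed emptyDir like the one this test checks is requested with `medium: Memory`. A hedged sketch (pod name and the inspection command are assumptions, not the generated test pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs         # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox
    # Print the filesystem type and the mode bits of the mount point.
    command: ["sh", "-c", "stat -f -c %T /test-volume && stat -c %a /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```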
May 12 10:36:57.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:36:57.606: INFO: namespace: e2e-tests-emptydir-zmljb, resource: bindings, ignored listing per whitelist May 12 10:36:57.608: INFO: namespace e2e-tests-emptydir-zmljb deletion completed in 6.961360542s • [SLOW TEST:14.511 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:36:57.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 12 10:36:58.228: INFO: Waiting up to 5m0s for pod "downward-api-81a325c3-943c-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-vz66f" to be "success or failure" May 12 10:36:58.250: INFO: Pod "downward-api-81a325c3-943c-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.026431ms May 12 10:37:00.408: INFO: Pod "downward-api-81a325c3-943c-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180540968s May 12 10:37:02.412: INFO: Pod "downward-api-81a325c3-943c-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184642687s May 12 10:37:04.417: INFO: Pod "downward-api-81a325c3-943c-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189520741s May 12 10:37:06.420: INFO: Pod "downward-api-81a325c3-943c-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.192616096s STEP: Saw pod success May 12 10:37:06.420: INFO: Pod "downward-api-81a325c3-943c-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:37:06.423: INFO: Trying to get logs from node hunter-worker pod downward-api-81a325c3-943c-11ea-92b2-0242ac11001c container dapi-container: STEP: delete the pod May 12 10:37:06.445: INFO: Waiting for pod downward-api-81a325c3-943c-11ea-92b2-0242ac11001c to disappear May 12 10:37:06.450: INFO: Pod downward-api-81a325c3-943c-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:37:06.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vz66f" for this suite. 
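The Downward API behavior under test — `limits.cpu`/`limits.memory` falling back to node allocatable when the container declares no limits — can be sketched like this (pod and env var names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    # No resources.limits set: resourceFieldRef then reports node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```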
May 12 10:37:12.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:37:12.554: INFO: namespace: e2e-tests-downward-api-vz66f, resource: bindings, ignored listing per whitelist May 12 10:37:12.560: INFO: namespace e2e-tests-downward-api-vz66f deletion completed in 6.107094204s • [SLOW TEST:14.952 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:37:12.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 12 10:37:12.667: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
May 12 10:37:12.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:22.363: INFO: stderr: "" May 12 10:37:22.363: INFO: stdout: "service/redis-slave created\n" May 12 10:37:22.363: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
May 12 10:37:22.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:22.797: INFO: stderr: "" May 12 10:37:22.797: INFO: stdout: "service/redis-master created\n" May 12 10:37:22.797: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 12 10:37:22.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:23.125: INFO: stderr: "" May 12 10:37:23.125: INFO: stdout: "service/frontend created\n" May 12 10:37:23.126: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
May 12 10:37:23.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:23.517: INFO: stderr: "" May 12 10:37:23.517: INFO: stdout: 
"deployment.extensions/frontend created\n" May 12 10:37:23.517: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 12 10:37:23.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:24.117: INFO: stderr: "" May 12 10:37:24.117: INFO: stdout: "deployment.extensions/redis-master created\n" May 12 10:37:24.118: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 12 10:37:24.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:25.706: INFO: stderr: "" May 12 10:37:25.706: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 12 10:37:25.706: INFO: Waiting for all frontend pods to be Running. May 12 10:37:45.757: INFO: Waiting for frontend to serve content. May 12 10:37:47.054: INFO: Trying to add a new entry to the guestbook. May 12 10:37:47.107: INFO: Verifying that added entry can be retrieved. May 12 10:37:47.117: INFO: Failed to get response from guestbook. 
err: , response: {"data": ""} STEP: using delete to clean up resources May 12 10:37:52.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:53.238: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:37:53.238: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 12 10:37:53.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:54.804: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:37:54.804: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 12 10:37:54.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:55.316: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:37:55.316: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 10:37:55.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:55.836: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 12 10:37:55.836: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 10:37:55.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:56.799: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:37:56.800: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 12 10:37:56.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpkd6' May 12 10:37:58.393: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:37:58.393: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:37:58.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bpkd6" for this suite. 
May 12 10:38:43.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:38:43.481: INFO: namespace: e2e-tests-kubectl-bpkd6, resource: bindings, ignored listing per whitelist
May 12 10:38:43.505: INFO: namespace e2e-tests-kubectl-bpkd6 deletion completed in 44.609492909s
• [SLOW TEST:90.945 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:38:43.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
May 12 10:38:43.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:44.075: INFO: stderr: ""
May 12 10:38:44.075: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 12 10:38:44.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:44.260: INFO: stderr: ""
May 12 10:38:44.260: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
May 12 10:38:49.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:49.363: INFO: stderr: ""
May 12 10:38:49.363: INFO: stdout: "update-demo-nautilus-g562l update-demo-nautilus-x25ps "
May 12 10:38:49.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g562l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:49.495: INFO: stderr: ""
May 12 10:38:49.495: INFO: stdout: ""
May 12 10:38:49.495: INFO: update-demo-nautilus-g562l is created but not running
May 12 10:38:54.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:54.955: INFO: stderr: ""
May 12 10:38:54.955: INFO: stdout: "update-demo-nautilus-g562l update-demo-nautilus-x25ps "
May 12 10:38:54.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g562l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:55.247: INFO: stderr: ""
May 12 10:38:55.247: INFO: stdout: "true"
May 12 10:38:55.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g562l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:55.520: INFO: stderr: ""
May 12 10:38:55.520: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 10:38:55.520: INFO: validating pod update-demo-nautilus-g562l
May 12 10:38:55.527: INFO: got data: {
  "image": "nautilus.jpg"
}
May 12 10:38:55.527: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 10:38:55.527: INFO: update-demo-nautilus-g562l is verified up and running
May 12 10:38:55.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x25ps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:55.674: INFO: stderr: ""
May 12 10:38:55.674: INFO: stdout: "true"
May 12 10:38:55.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x25ps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:55.767: INFO: stderr: ""
May 12 10:38:55.767: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 10:38:55.767: INFO: validating pod update-demo-nautilus-x25ps
May 12 10:38:55.770: INFO: got data: {
  "image": "nautilus.jpg"
}
May 12 10:38:55.770: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 10:38:55.770: INFO: update-demo-nautilus-x25ps is verified up and running
STEP: scaling down the replication controller
May 12 10:38:55.772: INFO: scanned /root for discovery docs:
May 12 10:38:55.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:57.028: INFO: stderr: ""
May 12 10:38:57.028: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 12 10:38:57.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:38:57.220: INFO: stderr: ""
May 12 10:38:57.220: INFO: stdout: "update-demo-nautilus-g562l update-demo-nautilus-x25ps "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 12 10:39:02.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:02.325: INFO: stderr: ""
May 12 10:39:02.325: INFO: stdout: "update-demo-nautilus-g562l "
May 12 10:39:02.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g562l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:02.431: INFO: stderr: ""
May 12 10:39:02.431: INFO: stdout: "true"
May 12 10:39:02.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g562l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:02.522: INFO: stderr: ""
May 12 10:39:02.522: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 10:39:02.522: INFO: validating pod update-demo-nautilus-g562l
May 12 10:39:02.525: INFO: got data: {
  "image": "nautilus.jpg"
}
May 12 10:39:02.525: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 10:39:02.525: INFO: update-demo-nautilus-g562l is verified up and running
STEP: scaling up the replication controller
May 12 10:39:02.527: INFO: scanned /root for discovery docs:
May 12 10:39:02.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:03.695: INFO: stderr: ""
May 12 10:39:03.695: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 12 10:39:03.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:03.788: INFO: stderr: ""
May 12 10:39:03.788: INFO: stdout: "update-demo-nautilus-9cmff update-demo-nautilus-g562l "
May 12 10:39:03.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cmff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:03.887: INFO: stderr: ""
May 12 10:39:03.887: INFO: stdout: ""
May 12 10:39:03.887: INFO: update-demo-nautilus-9cmff is created but not running
May 12 10:39:08.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:09.367: INFO: stderr: ""
May 12 10:39:09.367: INFO: stdout: "update-demo-nautilus-9cmff update-demo-nautilus-g562l "
May 12 10:39:09.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cmff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:09.776: INFO: stderr: ""
May 12 10:39:09.776: INFO: stdout: "true"
May 12 10:39:09.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9cmff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:09.902: INFO: stderr: ""
May 12 10:39:09.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 10:39:09.902: INFO: validating pod update-demo-nautilus-9cmff
May 12 10:39:09.905: INFO: got data: {
  "image": "nautilus.jpg"
}
May 12 10:39:09.905: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 10:39:09.905: INFO: update-demo-nautilus-9cmff is verified up and running
May 12 10:39:09.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g562l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:10.048: INFO: stderr: ""
May 12 10:39:10.048: INFO: stdout: "true"
May 12 10:39:10.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g562l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:10.156: INFO: stderr: ""
May 12 10:39:10.156: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 10:39:10.156: INFO: validating pod update-demo-nautilus-g562l
May 12 10:39:10.159: INFO: got data: {
  "image": "nautilus.jpg"
}
May 12 10:39:10.159: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 10:39:10.159: INFO: update-demo-nautilus-g562l is verified up and running
STEP: using delete to clean up resources
May 12 10:39:10.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:10.278: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 10:39:10.278: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 12 10:39:10.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-vlrt5'
May 12 10:39:10.433: INFO: stderr: "No resources found.\n"
May 12 10:39:10.433: INFO: stdout: ""
May 12 10:39:10.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-vlrt5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 12 10:39:10.729: INFO: stderr: ""
May 12 10:39:10.729: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:39:10.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vlrt5" for this suite.
May 12 10:39:34.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:39:34.851: INFO: namespace: e2e-tests-kubectl-vlrt5, resource: bindings, ignored listing per whitelist
May 12 10:39:34.873: INFO: namespace e2e-tests-kubectl-vlrt5 deletion completed in 24.139523867s
• [SLOW TEST:51.367 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:39:34.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
May 12 10:39:43.442: INFO: Pod pod-hostip-df28e3fb-943c-11ea-92b2-0242ac11001c has hostIP: 172.17.0.4
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:39:43.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-25hbr" for this suite.
May 12 10:40:07.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:40:07.693: INFO: namespace: e2e-tests-pods-25hbr, resource: bindings, ignored listing per whitelist
May 12 10:40:07.698: INFO: namespace e2e-tests-pods-25hbr deletion completed in 24.25390825s
• [SLOW TEST:32.826 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:40:07.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 10:40:07.894: INFO: Creating deployment "nginx-deployment"
May 12 10:40:07.898: INFO: Waiting for observed generation 1
May 12 10:40:10.436: INFO: Waiting for all required pods to come up
May 12 10:40:10.658: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 12 10:40:25.500: INFO: Waiting for deployment "nginx-deployment" to complete
May 12 10:40:25.503: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 12 10:40:25.508: INFO: Updating deployment nginx-deployment
May 12 10:40:25.508: INFO: Waiting for observed generation 2
May 12 10:40:28.381: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 12 10:40:31.534: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 12 10:40:31.978: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 12 10:40:32.226: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 12 10:40:32.226: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 12 10:40:32.228: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 12 10:40:32.233: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 12 10:40:32.233: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 12 10:40:32.238: INFO: Updating deployment nginx-deployment
May 12 10:40:32.238: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 12 10:40:32.525: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 12 10:40:33.472: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
May 12 10:40:34.255: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-csw2v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-csw2v/deployments/nginx-deployment,UID:f2b28627-943c-11ea-99e8-0242ac110002,ResourceVersion:10148132,Generation:3,CreationTimestamp:2020-05-12 10:40:07 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Available True 2020-05-12 10:40:23 +0000 UTC 2020-05-12 10:40:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 10:40:29 +0000 UTC 2020-05-12 10:40:07 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 12 10:40:35.083: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-csw2v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-csw2v/replicasets/nginx-deployment-5c98f8fb5,UID:fd32460d-943c-11ea-99e8-0242ac110002,ResourceVersion:10148139,Generation:3,CreationTimestamp:2020-05-12 10:40:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f2b28627-943c-11ea-99e8-0242ac110002 0xc001793527 0xc001793528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 10:40:35.083: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 12 10:40:35.083: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-csw2v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-csw2v/replicasets/nginx-deployment-85ddf47c5d,UID:f2b5de07-943c-11ea-99e8-0242ac110002,ResourceVersion:10148133,Generation:3,CreationTimestamp:2020-05-12 10:40:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f2b28627-943c-11ea-99e8-0242ac110002 0xc0017935e7 0xc0017935e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 12 10:40:36.783: INFO: Pod "nginx-deployment-5c98f8fb5-2css8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2css8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-2css8,UID:02e81d4d-943d-11ea-99e8-0242ac110002,ResourceVersion:10148183,Generation:0,CreationTimestamp:2020-05-12 10:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001793f57 0xc001793f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001793fc0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001793fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.784: INFO: Pod "nginx-deployment-5c98f8fb5-2s5bn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2s5bn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-2s5bn,UID:fd4ad804-943c-11ea-99e8-0242ac110002,ResourceVersion:10148122,Generation:0,CreationTimestamp:2020-05-12 10:40:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab4040 0xc001ab4041}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab40c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab40e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-12 10:40:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.784: INFO: Pod "nginx-deployment-5c98f8fb5-bmmzl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bmmzl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-bmmzl,UID:02e8245c-943d-11ea-99e8-0242ac110002,ResourceVersion:10148184,Generation:0,CreationTimestamp:2020-05-12 10:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab41a7 0xc001ab41a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab4210} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001ab4230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.784: INFO: Pod "nginx-deployment-5c98f8fb5-dk7jb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dk7jb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-dk7jb,UID:fd4ac0e7-943c-11ea-99e8-0242ac110002,ResourceVersion:10148094,Generation:0,CreationTimestamp:2020-05-12 10:40:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab4290 0xc001ab4291}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab4310} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab4330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-12 10:40:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.784: INFO: Pod "nginx-deployment-5c98f8fb5-dmdvq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dmdvq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-dmdvq,UID:0296ac2f-943d-11ea-99e8-0242ac110002,ResourceVersion:10148168,Generation:0,CreationTimestamp:2020-05-12 10:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab4407 0xc001ab4408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab4740} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001ab4760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.784: INFO: Pod "nginx-deployment-5c98f8fb5-g6r99" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g6r99,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-g6r99,UID:0268d206-943d-11ea-99e8-0242ac110002,ResourceVersion:10148155,Generation:0,CreationTimestamp:2020-05-12 10:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab47d7 0xc001ab47d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab4890} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab48b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.785: INFO: Pod "nginx-deployment-5c98f8fb5-ggxbt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ggxbt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-ggxbt,UID:0296b369-943d-11ea-99e8-0242ac110002,ResourceVersion:10148169,Generation:0,CreationTimestamp:2020-05-12 10:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab49a7 0xc001ab49a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab4a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab4a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.785: INFO: Pod "nginx-deployment-5c98f8fb5-p29l8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p29l8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-p29l8,UID:02e80d05-943d-11ea-99e8-0242ac110002,ResourceVersion:10148181,Generation:0,CreationTimestamp:2020-05-12 10:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab4e17 0xc001ab4e18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab4e80} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001ab4ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.785: INFO: Pod "nginx-deployment-5c98f8fb5-pjfjd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pjfjd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-pjfjd,UID:fd397c36-943c-11ea-99e8-0242ac110002,ResourceVersion:10148144,Generation:0,CreationTimestamp:2020-05-12 10:40:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab4f00 0xc001ab4f01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab5040} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab5060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.40,StartTime:2020-05-12 10:40:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.786: INFO: Pod "nginx-deployment-5c98f8fb5-qv9sk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qv9sk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-qv9sk,UID:02e81686-943d-11ea-99e8-0242ac110002,ResourceVersion:10148182,Generation:0,CreationTimestamp:2020-05-12 10:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab51b7 0xc001ab51b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab52d0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001ab52f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.786: INFO: Pod "nginx-deployment-5c98f8fb5-vtttl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vtttl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-vtttl,UID:fe2cd1eb-943c-11ea-99e8-0242ac110002,ResourceVersion:10148121,Generation:0,CreationTimestamp:2020-05-12 10:40:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab5350 0xc001ab5351}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab53d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab53f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-12 10:40:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.786: INFO: Pod "nginx-deployment-5c98f8fb5-xl8bw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xl8bw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-5c98f8fb5-xl8bw,UID:fe4e2f28-943c-11ea-99e8-0242ac110002,ResourceVersion:10148126,Generation:0,CreationTimestamp:2020-05-12 10:40:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fd32460d-943c-11ea-99e8-0242ac110002 0xc001ab54b7 0xc001ab54b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab5530} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001ab5550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-12 10:40:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.786: INFO: Pod "nginx-deployment-85ddf47c5d-2wn4n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2wn4n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-2wn4n,UID:0268e03d-943d-11ea-99e8-0242ac110002,ResourceVersion:10148157,Generation:0,CreationTimestamp:2020-05-12 10:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc001ab5617 0xc001ab5618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab56a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab56d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.786: INFO: Pod "nginx-deployment-85ddf47c5d-47p95" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-47p95,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-47p95,UID:0296d30b-943d-11ea-99e8-0242ac110002,ResourceVersion:10148180,Generation:0,CreationTimestamp:2020-05-12 10:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc001ab5747 0xc001ab5748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001ab57c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab57f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.787: INFO: Pod "nginx-deployment-85ddf47c5d-54ts7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-54ts7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-54ts7,UID:f2c138f5-943c-11ea-99e8-0242ac110002,ResourceVersion:10148003,Generation:0,CreationTimestamp:2020-05-12 10:40:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc001ab5867 0xc001ab5868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab58e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab5900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.35,StartTime:2020-05-12 10:40:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:40:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c418b42f4715babad14c63dbe274a301b914c57d569a6ea16522a7b8d1c637ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.787: INFO: Pod "nginx-deployment-85ddf47c5d-7gf46" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7gf46,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-7gf46,UID:f2c1b921-943c-11ea-99e8-0242ac110002,ResourceVersion:10148049,Generation:0,CreationTimestamp:2020-05-12 10:40:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc001ab59e7 0xc001ab59e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001ab5d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab5d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.39,StartTime:2020-05-12 10:40:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:40:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://18825796287412fd30427066731b45fe012308a9a825a5d37f89638d328b3d62}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.787: INFO: Pod "nginx-deployment-85ddf47c5d-97x4k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-97x4k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-97x4k,UID:0296d032-943d-11ea-99e8-0242ac110002,ResourceVersion:10148177,Generation:0,CreationTimestamp:2020-05-12 10:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc001ab5e07 0xc001ab5e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ab5eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab5ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.787: INFO: Pod "nginx-deployment-85ddf47c5d-bqhbf" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bqhbf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-bqhbf,UID:f2cc22fd-943c-11ea-99e8-0242ac110002,ResourceVersion:10148032,Generation:0,CreationTimestamp:2020-05-12 10:40:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc001ab5f47 0xc001ab5f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001ab5fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ab5fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.36,StartTime:2020-05-12 10:40:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:40:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://737832ed1a77bcd09fe2b571418347e8441bc93e460bbfc7a15c1c455b1ca65f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.787: INFO: Pod "nginx-deployment-85ddf47c5d-dzftl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dzftl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-dzftl,UID:02690775-943d-11ea-99e8-0242ac110002,ResourceVersion:10148159,Generation:0,CreationTimestamp:2020-05-12 10:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016ce0d7 0xc0016ce0d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016ce150} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016ce1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.787: INFO: Pod "nginx-deployment-85ddf47c5d-fs29r" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fs29r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-fs29r,UID:f2dba2b0-943c-11ea-99e8-0242ac110002,ResourceVersion:10148044,Generation:0,CreationTimestamp:2020-05-12 10:40:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016ce217 0xc0016ce218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0016ce290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016ce2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.38,StartTime:2020-05-12 10:40:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:40:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a1e99e3fd5b06a4d65a02641895f2b7c5a35edb6d6ba07acd22d5e8a8d8acf24}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.787: INFO: Pod "nginx-deployment-85ddf47c5d-gb9k6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gb9k6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-gb9k6,UID:02e7c104-943d-11ea-99e8-0242ac110002,ResourceVersion:10148171,Generation:0,CreationTimestamp:2020-05-12 10:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016ce427 0xc0016ce428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016ce490} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016ce4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.787: INFO: Pod "nginx-deployment-85ddf47c5d-h5vcb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h5vcb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-h5vcb,UID:01605840-943d-11ea-99e8-0242ac110002,ResourceVersion:10148163,Generation:0,CreationTimestamp:2020-05-12 10:40:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016ce570 0xc0016ce571}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0016ce5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016ce600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-12 10:40:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.788: INFO: Pod "nginx-deployment-85ddf47c5d-j5s9l" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j5s9l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-j5s9l,UID:f2cc21bd-943c-11ea-99e8-0242ac110002,ResourceVersion:10148056,Generation:0,CreationTimestamp:2020-05-12 10:40:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016ce6b7 0xc0016ce6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016ce730} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016ce750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.211,StartTime:2020-05-12 10:40:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:40:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5a08b897a4ee52a6383cd935c1e0a5203ed729e64d663d5973c0e4e05f5dfdda}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.788: INFO: Pod "nginx-deployment-85ddf47c5d-k58lz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k58lz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-k58lz,UID:02e7d2db-943d-11ea-99e8-0242ac110002,ResourceVersion:10148176,Generation:0,CreationTimestamp:2020-05-12 10:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016ce817 0xc0016ce818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016ce880} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016ce8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.788: INFO: Pod "nginx-deployment-85ddf47c5d-ndjhg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ndjhg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-ndjhg,UID:0296d4d1-943d-11ea-99e8-0242ac110002,ResourceVersion:10148173,Generation:0,CreationTimestamp:2020-05-12 10:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016ce900 0xc0016ce901}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016ce970} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016ce990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.788: INFO: Pod "nginx-deployment-85ddf47c5d-rchc2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rchc2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-rchc2,UID:02e7e820-943d-11ea-99e8-0242ac110002,ResourceVersion:10148178,Generation:0,CreationTimestamp:2020-05-12 10:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016cea07 0xc0016cea08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0016cea70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016cea90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.788: INFO: Pod "nginx-deployment-85ddf47c5d-trmng" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-trmng,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-trmng,UID:02e7f7ea-943d-11ea-99e8-0242ac110002,ResourceVersion:10148179,Generation:0,CreationTimestamp:2020-05-12 10:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016ceaf0 0xc0016ceaf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016ceb50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016ceb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.788: INFO: Pod "nginx-deployment-85ddf47c5d-vs7gt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vs7gt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-vs7gt,UID:f2cc20c0-943c-11ea-99e8-0242ac110002,ResourceVersion:10148042,Generation:0,CreationTimestamp:2020-05-12 10:40:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016cebd0 0xc0016cebd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016cec40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016cec60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.37,StartTime:2020-05-12 10:40:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:40:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://536fd611464a539169a8ae40b955395336dbbd55102bf21d8a1b9361ddba52be}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.788: INFO: Pod "nginx-deployment-85ddf47c5d-wkwqt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wkwqt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-wkwqt,UID:f2cc223c-943c-11ea-99e8-0242ac110002,ResourceVersion:10148040,Generation:0,CreationTimestamp:2020-05-12 10:40:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016ced37 0xc0016ced38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016cedb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016cedd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.210,StartTime:2020-05-12 10:40:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:40:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://88caf448ec6fb234c9cf6bb92f00f916d5865ffd0512e88f267c6fb4608d2e8e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.788: INFO: Pod "nginx-deployment-85ddf47c5d-wvww8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wvww8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-wvww8,UID:02e7d1d0-943d-11ea-99e8-0242ac110002,ResourceVersion:10148175,Generation:0,CreationTimestamp:2020-05-12 10:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016cee97 0xc0016cee98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0016cef00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016cef20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.788: INFO: Pod "nginx-deployment-85ddf47c5d-xpnhf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xpnhf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-xpnhf,UID:0296ca4f-943d-11ea-99e8-0242ac110002,ResourceVersion:10148170,Generation:0,CreationTimestamp:2020-05-12 10:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016cef80 0xc0016cef81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016ceff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016cf010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:40:36.789: INFO: Pod "nginx-deployment-85ddf47c5d-zjgk6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zjgk6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-csw2v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-csw2v/pods/nginx-deployment-85ddf47c5d-zjgk6,UID:f2c1b896-943c-11ea-99e8-0242ac110002,ResourceVersion:10148026,Generation:0,CreationTimestamp:2020-05-12 10:40:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f2b5de07-943c-11ea-99e8-0242ac110002 0xc0016cf087 0xc0016cf088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gm2bx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-gm2bx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gm2bx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016cf100} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016cf120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:40:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.209,StartTime:2020-05-12 10:40:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:40:19 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3f70d0591848ee26057ac131ef2133940c36334748bdd9f14a0e3535f843afb8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:40:36.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-csw2v" for this suite. May 12 10:41:14.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:41:14.501: INFO: namespace: e2e-tests-deployment-csw2v, resource: bindings, ignored listing per whitelist May 12 10:41:14.507: INFO: namespace e2e-tests-deployment-csw2v deletion completed in 36.503523827s • [SLOW TEST:66.808 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:41:14.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the 
correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 12 10:41:15.903: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-fdgmm" to be "success or failure" May 12 10:41:16.467: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 563.628092ms May 12 10:41:19.113: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.210131353s May 12 10:41:21.358: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.455483037s May 12 10:41:23.363: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.460179189s May 12 10:41:25.367: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.463886932s May 12 10:41:28.473: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.570451548s May 12 10:41:30.476: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.573437871s May 12 10:41:32.480: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.576940335s May 12 10:41:34.483: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=true. Elapsed: 18.579678737s May 12 10:41:36.486: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=true. Elapsed: 20.583298302s May 12 10:41:38.490: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=true. Elapsed: 22.586898138s May 12 10:41:40.493: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=true. Elapsed: 24.590518074s May 12 10:41:42.808: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 26.905063771s May 12 10:41:44.811: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.908123184s STEP: Saw pod success May 12 10:41:44.811: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 12 10:41:44.813: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 12 10:41:44.967: INFO: Waiting for pod pod-host-path-test to disappear May 12 10:41:45.333: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:41:45.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-fdgmm" for this suite. May 12 10:41:51.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:41:51.675: INFO: namespace: e2e-tests-hostpath-fdgmm, resource: bindings, ignored listing per whitelist May 12 10:41:51.790: INFO: namespace e2e-tests-hostpath-fdgmm deletion completed in 6.452930274s • [SLOW TEST:37.283 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:41:51.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:42:51.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rl4gv" for this suite. May 12 10:43:15.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:43:16.058: INFO: namespace: e2e-tests-container-probe-rl4gv, resource: bindings, ignored listing per whitelist May 12 10:43:16.107: INFO: namespace e2e-tests-container-probe-rl4gv deletion completed in 24.18778032s • [SLOW TEST:84.317 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:43:16.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace 
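For readers following the ConfigMap volume test below: the framework builds its objects programmatically, but they correspond roughly to a manifest like this hypothetical sketch (names, keys, and the test image are illustrative; the real run uses generated names such as `configmap-test-volume-<uid>` and a conformance test image):

```yaml
# Hypothetical equivalent of what the conformance test creates in its
# e2e-tests-configmap-* namespace; not taken verbatim from the test source.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-example   # real name is generated per run
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # real name is generated per run
spec:
  securityContext:
    runAsUser: 1000                     # "as non-root" part of the test
  containers:
  - name: configmap-volume-test
    image: busybox                      # placeholder; the suite uses its own test image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example
      defaultMode: 0644
```

The test then waits for the pod to reach `Succeeded` (the "success or failure" condition logged below) and checks the container's stdout for the expected ConfigMap value.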
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-63ae6ba5-943d-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume configMaps
May 12 10:43:17.819: INFO: Waiting up to 5m0s for pod "pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c" in namespace "e2e-tests-configmap-krthf" to be "success or failure"
May 12 10:43:17.910: INFO: Pod "pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 91.212633ms
May 12 10:43:19.914: INFO: Pod "pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095112005s
May 12 10:43:21.917: INFO: Pod "pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097902599s
May 12 10:43:24.115: INFO: Pod "pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296114419s
May 12 10:43:26.133: INFO: Pod "pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314097621s
May 12 10:43:28.136: INFO: Pod "pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.316939991s
STEP: Saw pod success
May 12 10:43:28.136: INFO: Pod "pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:43:28.138: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c container configmap-volume-test: 
STEP: delete the pod
May 12 10:43:28.455: INFO: Waiting for pod pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c to disappear
May 12 10:43:29.074: INFO: Pod pod-configmaps-63e5a108-943d-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:43:29.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-krthf" for this suite.
May 12 10:43:37.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:43:37.466: INFO: namespace: e2e-tests-configmap-krthf, resource: bindings, ignored listing per whitelist
May 12 10:43:37.475: INFO: namespace e2e-tests-configmap-krthf deletion completed in 8.190699955s
• [SLOW TEST:21.367 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes
client
May 12 10:43:37.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nxxtb
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-nxxtb
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-nxxtb
May 12 10:43:38.164: INFO: Found 0 stateful pods, waiting for 1
May 12 10:43:48.168: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 12 10:43:48.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 12 10:43:48.417: INFO: stderr: "I0512 10:43:48.283601 1678 log.go:172] (0xc00086e2c0) (0xc000665540) Create stream\nI0512 10:43:48.283640 1678 log.go:172] (0xc00086e2c0) (0xc000665540) Stream added, broadcasting: 1\nI0512 10:43:48.285706 1678 log.go:172] (0xc00086e2c0) Reply frame received for 1\nI0512 10:43:48.285748 1678 log.go:172] (0xc00086e2c0) (0xc0006b2000) Create stream\nI0512 10:43:48.285764 1678 log.go:172] (0xc00086e2c0) (0xc0006b2000) Stream added, broadcasting: 3\nI0512 10:43:48.286546 1678 log.go:172] (0xc00086e2c0) Reply frame received for 3\nI0512 10:43:48.286566 1678 
log.go:172] (0xc00086e2c0) (0xc0006655e0) Create stream\nI0512 10:43:48.286572 1678 log.go:172] (0xc00086e2c0) (0xc0006655e0) Stream added, broadcasting: 5\nI0512 10:43:48.287327 1678 log.go:172] (0xc00086e2c0) Reply frame received for 5\nI0512 10:43:48.410632 1678 log.go:172] (0xc00086e2c0) Data frame received for 3\nI0512 10:43:48.410684 1678 log.go:172] (0xc0006b2000) (3) Data frame handling\nI0512 10:43:48.410702 1678 log.go:172] (0xc0006b2000) (3) Data frame sent\nI0512 10:43:48.410723 1678 log.go:172] (0xc00086e2c0) Data frame received for 3\nI0512 10:43:48.410755 1678 log.go:172] (0xc0006b2000) (3) Data frame handling\nI0512 10:43:48.410820 1678 log.go:172] (0xc00086e2c0) Data frame received for 5\nI0512 10:43:48.410855 1678 log.go:172] (0xc0006655e0) (5) Data frame handling\nI0512 10:43:48.412492 1678 log.go:172] (0xc00086e2c0) Data frame received for 1\nI0512 10:43:48.412524 1678 log.go:172] (0xc000665540) (1) Data frame handling\nI0512 10:43:48.412644 1678 log.go:172] (0xc000665540) (1) Data frame sent\nI0512 10:43:48.412679 1678 log.go:172] (0xc00086e2c0) (0xc000665540) Stream removed, broadcasting: 1\nI0512 10:43:48.412748 1678 log.go:172] (0xc00086e2c0) Go away received\nI0512 10:43:48.413073 1678 log.go:172] (0xc00086e2c0) (0xc000665540) Stream removed, broadcasting: 1\nI0512 10:43:48.413097 1678 log.go:172] (0xc00086e2c0) (0xc0006b2000) Stream removed, broadcasting: 3\nI0512 10:43:48.413292 1678 log.go:172] (0xc00086e2c0) (0xc0006655e0) Stream removed, broadcasting: 5\n" May 12 10:43:48.417: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 10:43:48.417: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 10:43:48.421: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 10:43:58.426: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 
10:43:58.426: INFO: Waiting for statefulset status.replicas updated to 0
May 12 10:43:58.492: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:43:58.492: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:43:58.492: INFO: 
May 12 10:43:58.492: INFO: StatefulSet ss has not reached scale 3, at 1
May 12 10:43:59.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.940875304s
May 12 10:44:00.500: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.937087596s
May 12 10:44:01.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.932981726s
May 12 10:44:03.050: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.48636134s
May 12 10:44:04.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.382817739s
May 12 10:44:05.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.944394286s
May 12 10:44:06.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.743122902s
May 12 10:44:07.769: INFO: Verifying statefulset ss doesn't scale past 3 for another 737.754868ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-nxxtb
May 12 10:44:08.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 10:44:08.978: INFO: stderr: "I0512 10:44:08.907037 1700 log.go:172] (0xc0008322c0) (0xc00071a640) Create stream\nI0512 10:44:08.907101 1700 
log.go:172] (0xc0008322c0) (0xc00071a640) Stream added, broadcasting: 1\nI0512 10:44:08.909024 1700 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0512 10:44:08.909057 1700 log.go:172] (0xc0008322c0) (0xc0005d8dc0) Create stream\nI0512 10:44:08.909065 1700 log.go:172] (0xc0008322c0) (0xc0005d8dc0) Stream added, broadcasting: 3\nI0512 10:44:08.909995 1700 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0512 10:44:08.910014 1700 log.go:172] (0xc0008322c0) (0xc00071a6e0) Create stream\nI0512 10:44:08.910021 1700 log.go:172] (0xc0008322c0) (0xc00071a6e0) Stream added, broadcasting: 5\nI0512 10:44:08.910682 1700 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0512 10:44:08.971729 1700 log.go:172] (0xc0008322c0) Data frame received for 5\nI0512 10:44:08.971759 1700 log.go:172] (0xc00071a6e0) (5) Data frame handling\nI0512 10:44:08.971788 1700 log.go:172] (0xc0008322c0) Data frame received for 3\nI0512 10:44:08.971802 1700 log.go:172] (0xc0005d8dc0) (3) Data frame handling\nI0512 10:44:08.971811 1700 log.go:172] (0xc0005d8dc0) (3) Data frame sent\nI0512 10:44:08.971827 1700 log.go:172] (0xc0008322c0) Data frame received for 3\nI0512 10:44:08.971842 1700 log.go:172] (0xc0005d8dc0) (3) Data frame handling\nI0512 10:44:08.973004 1700 log.go:172] (0xc0008322c0) Data frame received for 1\nI0512 10:44:08.973027 1700 log.go:172] (0xc00071a640) (1) Data frame handling\nI0512 10:44:08.973039 1700 log.go:172] (0xc00071a640) (1) Data frame sent\nI0512 10:44:08.973056 1700 log.go:172] (0xc0008322c0) (0xc00071a640) Stream removed, broadcasting: 1\nI0512 10:44:08.973080 1700 log.go:172] (0xc0008322c0) Go away received\nI0512 10:44:08.973402 1700 log.go:172] (0xc0008322c0) (0xc00071a640) Stream removed, broadcasting: 1\nI0512 10:44:08.973426 1700 log.go:172] (0xc0008322c0) (0xc0005d8dc0) Stream removed, broadcasting: 3\nI0512 10:44:08.973435 1700 log.go:172] (0xc0008322c0) (0xc00071a6e0) Stream removed, broadcasting: 5\n" May 12 10:44:08.978: INFO: stdout: 
"'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 10:44:08.978: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 10:44:08.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:44:09.180: INFO: stderr: "I0512 10:44:09.096689 1721 log.go:172] (0xc000138840) (0xc000125400) Create stream\nI0512 10:44:09.096740 1721 log.go:172] (0xc000138840) (0xc000125400) Stream added, broadcasting: 1\nI0512 10:44:09.098484 1721 log.go:172] (0xc000138840) Reply frame received for 1\nI0512 10:44:09.098542 1721 log.go:172] (0xc000138840) (0xc0006f2000) Create stream\nI0512 10:44:09.098559 1721 log.go:172] (0xc000138840) (0xc0006f2000) Stream added, broadcasting: 3\nI0512 10:44:09.099369 1721 log.go:172] (0xc000138840) Reply frame received for 3\nI0512 10:44:09.099404 1721 log.go:172] (0xc000138840) (0xc0006cc000) Create stream\nI0512 10:44:09.099415 1721 log.go:172] (0xc000138840) (0xc0006cc000) Stream added, broadcasting: 5\nI0512 10:44:09.100161 1721 log.go:172] (0xc000138840) Reply frame received for 5\nI0512 10:44:09.175228 1721 log.go:172] (0xc000138840) Data frame received for 3\nI0512 10:44:09.175256 1721 log.go:172] (0xc0006f2000) (3) Data frame handling\nI0512 10:44:09.175268 1721 log.go:172] (0xc0006f2000) (3) Data frame sent\nI0512 10:44:09.175274 1721 log.go:172] (0xc000138840) Data frame received for 3\nI0512 10:44:09.175279 1721 log.go:172] (0xc0006f2000) (3) Data frame handling\nI0512 10:44:09.175305 1721 log.go:172] (0xc000138840) Data frame received for 5\nI0512 10:44:09.175357 1721 log.go:172] (0xc0006cc000) (5) Data frame handling\nI0512 10:44:09.175376 1721 log.go:172] (0xc0006cc000) (5) Data frame sent\nI0512 10:44:09.175387 1721 log.go:172] (0xc000138840) Data frame received for 5\nI0512 10:44:09.175393 1721 
log.go:172] (0xc0006cc000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0512 10:44:09.176463 1721 log.go:172] (0xc000138840) Data frame received for 1\nI0512 10:44:09.176488 1721 log.go:172] (0xc000125400) (1) Data frame handling\nI0512 10:44:09.176505 1721 log.go:172] (0xc000125400) (1) Data frame sent\nI0512 10:44:09.176526 1721 log.go:172] (0xc000138840) (0xc000125400) Stream removed, broadcasting: 1\nI0512 10:44:09.176547 1721 log.go:172] (0xc000138840) Go away received\nI0512 10:44:09.176759 1721 log.go:172] (0xc000138840) (0xc000125400) Stream removed, broadcasting: 1\nI0512 10:44:09.176784 1721 log.go:172] (0xc000138840) (0xc0006f2000) Stream removed, broadcasting: 3\nI0512 10:44:09.176807 1721 log.go:172] (0xc000138840) (0xc0006cc000) Stream removed, broadcasting: 5\n" May 12 10:44:09.180: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 10:44:09.180: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 10:44:09.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:44:09.360: INFO: stderr: "I0512 10:44:09.302057 1743 log.go:172] (0xc000138790) (0xc000718640) Create stream\nI0512 10:44:09.302113 1743 log.go:172] (0xc000138790) (0xc000718640) Stream added, broadcasting: 1\nI0512 10:44:09.304161 1743 log.go:172] (0xc000138790) Reply frame received for 1\nI0512 10:44:09.304380 1743 log.go:172] (0xc000138790) (0xc000524dc0) Create stream\nI0512 10:44:09.304411 1743 log.go:172] (0xc000138790) (0xc000524dc0) Stream added, broadcasting: 3\nI0512 10:44:09.305919 1743 log.go:172] (0xc000138790) Reply frame received for 3\nI0512 10:44:09.305953 1743 log.go:172] (0xc000138790) (0xc000760000) Create stream\nI0512 10:44:09.305964 1743 log.go:172] (0xc000138790) 
(0xc000760000) Stream added, broadcasting: 5\nI0512 10:44:09.306629 1743 log.go:172] (0xc000138790) Reply frame received for 5\nI0512 10:44:09.354148 1743 log.go:172] (0xc000138790) Data frame received for 3\nI0512 10:44:09.354168 1743 log.go:172] (0xc000524dc0) (3) Data frame handling\nI0512 10:44:09.354178 1743 log.go:172] (0xc000524dc0) (3) Data frame sent\nI0512 10:44:09.354183 1743 log.go:172] (0xc000138790) Data frame received for 3\nI0512 10:44:09.354186 1743 log.go:172] (0xc000524dc0) (3) Data frame handling\nI0512 10:44:09.354409 1743 log.go:172] (0xc000138790) Data frame received for 5\nI0512 10:44:09.354432 1743 log.go:172] (0xc000760000) (5) Data frame handling\nI0512 10:44:09.354444 1743 log.go:172] (0xc000760000) (5) Data frame sent\nI0512 10:44:09.354456 1743 log.go:172] (0xc000138790) Data frame received for 5\nI0512 10:44:09.354465 1743 log.go:172] (0xc000760000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0512 10:44:09.356419 1743 log.go:172] (0xc000138790) Data frame received for 1\nI0512 10:44:09.356432 1743 log.go:172] (0xc000718640) (1) Data frame handling\nI0512 10:44:09.356440 1743 log.go:172] (0xc000718640) (1) Data frame sent\nI0512 10:44:09.356448 1743 log.go:172] (0xc000138790) (0xc000718640) Stream removed, broadcasting: 1\nI0512 10:44:09.356457 1743 log.go:172] (0xc000138790) Go away received\nI0512 10:44:09.356591 1743 log.go:172] (0xc000138790) (0xc000718640) Stream removed, broadcasting: 1\nI0512 10:44:09.356612 1743 log.go:172] (0xc000138790) (0xc000524dc0) Stream removed, broadcasting: 3\nI0512 10:44:09.356624 1743 log.go:172] (0xc000138790) (0xc000760000) Stream removed, broadcasting: 5\n" May 12 10:44:09.360: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 10:44:09.360: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 10:44:09.364: INFO: Waiting for pod ss-0 to 
enter Running - Ready=true, currently Running - Ready=true May 12 10:44:09.364: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:44:09.364: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 12 10:44:09.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 10:44:09.542: INFO: stderr: "I0512 10:44:09.485672 1765 log.go:172] (0xc000720370) (0xc000746640) Create stream\nI0512 10:44:09.485715 1765 log.go:172] (0xc000720370) (0xc000746640) Stream added, broadcasting: 1\nI0512 10:44:09.487459 1765 log.go:172] (0xc000720370) Reply frame received for 1\nI0512 10:44:09.487492 1765 log.go:172] (0xc000720370) (0xc0005badc0) Create stream\nI0512 10:44:09.487503 1765 log.go:172] (0xc000720370) (0xc0005badc0) Stream added, broadcasting: 3\nI0512 10:44:09.488263 1765 log.go:172] (0xc000720370) Reply frame received for 3\nI0512 10:44:09.488284 1765 log.go:172] (0xc000720370) (0xc000352000) Create stream\nI0512 10:44:09.488293 1765 log.go:172] (0xc000720370) (0xc000352000) Stream added, broadcasting: 5\nI0512 10:44:09.488910 1765 log.go:172] (0xc000720370) Reply frame received for 5\nI0512 10:44:09.537730 1765 log.go:172] (0xc000720370) Data frame received for 5\nI0512 10:44:09.537751 1765 log.go:172] (0xc000352000) (5) Data frame handling\nI0512 10:44:09.537765 1765 log.go:172] (0xc000720370) Data frame received for 3\nI0512 10:44:09.537769 1765 log.go:172] (0xc0005badc0) (3) Data frame handling\nI0512 10:44:09.537774 1765 log.go:172] (0xc0005badc0) (3) Data frame sent\nI0512 10:44:09.537778 1765 log.go:172] (0xc000720370) Data frame received for 3\nI0512 10:44:09.537782 1765 log.go:172] (0xc0005badc0) (3) Data frame handling\nI0512 10:44:09.538786 1765 log.go:172] (0xc000720370) Data 
frame received for 1\nI0512 10:44:09.538801 1765 log.go:172] (0xc000746640) (1) Data frame handling\nI0512 10:44:09.538810 1765 log.go:172] (0xc000746640) (1) Data frame sent\nI0512 10:44:09.538818 1765 log.go:172] (0xc000720370) (0xc000746640) Stream removed, broadcasting: 1\nI0512 10:44:09.538908 1765 log.go:172] (0xc000720370) Go away received\nI0512 10:44:09.538954 1765 log.go:172] (0xc000720370) (0xc000746640) Stream removed, broadcasting: 1\nI0512 10:44:09.538970 1765 log.go:172] (0xc000720370) (0xc0005badc0) Stream removed, broadcasting: 3\nI0512 10:44:09.538980 1765 log.go:172] (0xc000720370) (0xc000352000) Stream removed, broadcasting: 5\n" May 12 10:44:09.542: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 10:44:09.542: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 10:44:09.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 10:44:09.767: INFO: stderr: "I0512 10:44:09.666070 1787 log.go:172] (0xc00083c2c0) (0xc000750640) Create stream\nI0512 10:44:09.666266 1787 log.go:172] (0xc00083c2c0) (0xc000750640) Stream added, broadcasting: 1\nI0512 10:44:09.668597 1787 log.go:172] (0xc00083c2c0) Reply frame received for 1\nI0512 10:44:09.668639 1787 log.go:172] (0xc00083c2c0) (0xc0007506e0) Create stream\nI0512 10:44:09.668657 1787 log.go:172] (0xc00083c2c0) (0xc0007506e0) Stream added, broadcasting: 3\nI0512 10:44:09.669870 1787 log.go:172] (0xc00083c2c0) Reply frame received for 3\nI0512 10:44:09.669934 1787 log.go:172] (0xc00083c2c0) (0xc0005b6c80) Create stream\nI0512 10:44:09.669960 1787 log.go:172] (0xc00083c2c0) (0xc0005b6c80) Stream added, broadcasting: 5\nI0512 10:44:09.670844 1787 log.go:172] (0xc00083c2c0) Reply frame received for 5\nI0512 10:44:09.760094 1787 log.go:172] 
(0xc00083c2c0) Data frame received for 3\nI0512 10:44:09.760159 1787 log.go:172] (0xc0007506e0) (3) Data frame handling\nI0512 10:44:09.760181 1787 log.go:172] (0xc0007506e0) (3) Data frame sent\nI0512 10:44:09.760198 1787 log.go:172] (0xc00083c2c0) Data frame received for 3\nI0512 10:44:09.760218 1787 log.go:172] (0xc0007506e0) (3) Data frame handling\nI0512 10:44:09.760262 1787 log.go:172] (0xc00083c2c0) Data frame received for 5\nI0512 10:44:09.760289 1787 log.go:172] (0xc0005b6c80) (5) Data frame handling\nI0512 10:44:09.761748 1787 log.go:172] (0xc00083c2c0) Data frame received for 1\nI0512 10:44:09.761768 1787 log.go:172] (0xc000750640) (1) Data frame handling\nI0512 10:44:09.761780 1787 log.go:172] (0xc000750640) (1) Data frame sent\nI0512 10:44:09.761811 1787 log.go:172] (0xc00083c2c0) (0xc000750640) Stream removed, broadcasting: 1\nI0512 10:44:09.761863 1787 log.go:172] (0xc00083c2c0) Go away received\nI0512 10:44:09.761971 1787 log.go:172] (0xc00083c2c0) (0xc000750640) Stream removed, broadcasting: 1\nI0512 10:44:09.761994 1787 log.go:172] (0xc00083c2c0) (0xc0007506e0) Stream removed, broadcasting: 3\nI0512 10:44:09.762007 1787 log.go:172] (0xc00083c2c0) (0xc0005b6c80) Stream removed, broadcasting: 5\n" May 12 10:44:09.767: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 10:44:09.767: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 10:44:09.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 10:44:10.021: INFO: stderr: "I0512 10:44:09.893412 1810 log.go:172] (0xc00014c630) (0xc000732640) Create stream\nI0512 10:44:09.893474 1810 log.go:172] (0xc00014c630) (0xc000732640) Stream added, broadcasting: 1\nI0512 10:44:09.895173 1810 log.go:172] (0xc00014c630) Reply frame received for 1\nI0512 
10:44:09.895206 1810 log.go:172] (0xc00014c630) (0xc000636c80) Create stream\nI0512 10:44:09.895217 1810 log.go:172] (0xc00014c630) (0xc000636c80) Stream added, broadcasting: 3\nI0512 10:44:09.895841 1810 log.go:172] (0xc00014c630) Reply frame received for 3\nI0512 10:44:09.895879 1810 log.go:172] (0xc00014c630) (0xc0000e2000) Create stream\nI0512 10:44:09.895896 1810 log.go:172] (0xc00014c630) (0xc0000e2000) Stream added, broadcasting: 5\nI0512 10:44:09.896553 1810 log.go:172] (0xc00014c630) Reply frame received for 5\nI0512 10:44:10.014280 1810 log.go:172] (0xc00014c630) Data frame received for 3\nI0512 10:44:10.014331 1810 log.go:172] (0xc000636c80) (3) Data frame handling\nI0512 10:44:10.014368 1810 log.go:172] (0xc000636c80) (3) Data frame sent\nI0512 10:44:10.014387 1810 log.go:172] (0xc00014c630) Data frame received for 3\nI0512 10:44:10.014403 1810 log.go:172] (0xc000636c80) (3) Data frame handling\nI0512 10:44:10.014516 1810 log.go:172] (0xc00014c630) Data frame received for 5\nI0512 10:44:10.014535 1810 log.go:172] (0xc0000e2000) (5) Data frame handling\nI0512 10:44:10.016816 1810 log.go:172] (0xc00014c630) Data frame received for 1\nI0512 10:44:10.016828 1810 log.go:172] (0xc000732640) (1) Data frame handling\nI0512 10:44:10.016835 1810 log.go:172] (0xc000732640) (1) Data frame sent\nI0512 10:44:10.016843 1810 log.go:172] (0xc00014c630) (0xc000732640) Stream removed, broadcasting: 1\nI0512 10:44:10.017011 1810 log.go:172] (0xc00014c630) (0xc000732640) Stream removed, broadcasting: 1\nI0512 10:44:10.017025 1810 log.go:172] (0xc00014c630) (0xc000636c80) Stream removed, broadcasting: 3\nI0512 10:44:10.017210 1810 log.go:172] (0xc00014c630) (0xc0000e2000) Stream removed, broadcasting: 5\n" May 12 10:44:10.022: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 10:44:10.022: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 10:44:10.022: 
INFO: Waiting for statefulset status.replicas updated to 0
May 12 10:44:10.092: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
May 12 10:44:20.099: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 12 10:44:20.099: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 12 10:44:20.099: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 12 10:44:20.151: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:20.151: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:20.151: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:20.151: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:20.151: INFO: 
May 12 10:44:20.151: INFO: StatefulSet ss has not reached scale 0, at 3
May 12 10:44:21.274: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:21.274: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:21.274: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:21.274: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:21.274: INFO: 
May 12 10:44:21.274: INFO: StatefulSet ss has not reached scale 0, at 3
May 12 10:44:22.380: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:22.380: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:22.380: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:22.380: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:22.380: INFO: 
May 12 10:44:22.380: INFO: StatefulSet ss has not reached scale 0, at 3
May 12 10:44:23.422: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:23.422: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:23.422: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:23.422: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:23.422: INFO: 
May 12 10:44:23.422: INFO: StatefulSet ss has not reached scale 0, at 3
May 12 10:44:24.920: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:24.920: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:24.920: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:24.920: INFO: 
May 12 10:44:24.920: INFO: StatefulSet ss has not reached scale 0, at 2
May 12 10:44:25.924: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:25.924: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:25.924: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:25.924: INFO: 
May 12 10:44:25.924: INFO: StatefulSet ss has not reached scale 0, at 2
May 12 10:44:26.927: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:26.927: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:26.927: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:26.927: INFO: 
May 12 10:44:26.927: INFO: StatefulSet ss has not reached scale 0, at 2
May 12 10:44:27.930: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:27.930: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:27.930: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:27.930: INFO: 
May 12 10:44:27.930: INFO: StatefulSet ss has not reached scale 0, at 2
May 12 10:44:28.933: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:28.933: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:28.933: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:28.933: INFO: 
May 12 10:44:28.933: INFO: StatefulSet ss has not reached scale 0, at 2
May 12 10:44:29.936: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 10:44:29.936: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:38 +0000 UTC }]
May 12 10:44:29.936: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:44:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:43:58 +0000 UTC }]
May 12 10:44:29.936: INFO: 
May 12 10:44:29.936: INFO: 
StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-nxxtb May 12 10:44:30.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:44:31.066: INFO: rc: 1 May 12 10:44:31.066: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001526060 exit status 1 true [0xc000a407c8 0xc000a407e0 0xc000a407f8] [0xc000a407c8 0xc000a407e0 0xc000a407f8] [0xc000a407d8 0xc000a407f0] [0x935700 0x935700] 0xc001e3d020 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 12 10:44:41.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:44:41.186: INFO: rc: 1 May 12 10:44:41.186: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001526180 exit status 1 true [0xc000a40800 0xc000a40818 0xc000a40830] [0xc000a40800 0xc000a40818 0xc000a40830] [0xc000a40810 0xc000a40828] [0x935700 0x935700] 0xc001e3d320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:44:51.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec
--namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:44:51.337: INFO: rc: 1 May 12 10:44:51.337: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ec4300 exit status 1 true [0xc000e54580 0xc000e54598 0xc000e545b0] [0xc000e54580 0xc000e54598 0xc000e545b0] [0xc000e54590 0xc000e545a8] [0x935700 0x935700] 0xc001c23260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:45:01.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:45:01.408: INFO: rc: 1 May 12 10:45:01.408: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b8120 exit status 1 true [0xc000ca8000 0xc000ca8018 0xc000ca8030] [0xc000ca8000 0xc000ca8018 0xc000ca8030] [0xc000ca8010 0xc000ca8028] [0x935700 0x935700] 0xc0018ca720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:45:11.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:45:11.499: INFO: rc: 1 May 12 10:45:11.499: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021ae120 exit status 1 true [0xc0014fc000 0xc0014fc018 0xc0014fc030] [0xc0014fc000 0xc0014fc018 0xc0014fc030] [0xc0014fc010 0xc0014fc028] [0x935700 0x935700] 0xc001d65920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:45:21.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:45:21.593: INFO: rc: 1 May 12 10:45:21.593: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023c2120 exit status 1 true [0xc000e54008 0xc000e54020 0xc000e54038] [0xc000e54008 0xc000e54020 0xc000e54038] [0xc000e54018 0xc000e54030] [0x935700 0x935700] 0xc001878360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:45:31.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:45:31.676: INFO: rc: 1 May 12 10:45:31.676: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018802a0 exit status 1 true [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6010 0xc0015c6028] 
[0x935700 0x935700] 0xc001edaa80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:45:41.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:45:41.762: INFO: rc: 1 May 12 10:45:41.762: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018804e0 exit status 1 true [0xc0015c6038 0xc0015c6050 0xc0015c6068] [0xc0015c6038 0xc0015c6050 0xc0015c6068] [0xc0015c6048 0xc0015c6060] [0x935700 0x935700] 0xc001edad20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:45:51.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:45:51.858: INFO: rc: 1 May 12 10:45:51.858: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021ae270 exit status 1 true [0xc0014fc038 0xc0014fc050 0xc0014fc068] [0xc0014fc038 0xc0014fc050 0xc0014fc068] [0xc0014fc048 0xc0014fc060] [0x935700 0x935700] 0xc001d65bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:46:01.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' May 12 10:46:01.952: INFO: rc: 1 May 12 10:46:01.952: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021ae390 exit status 1 true [0xc0014fc070 0xc0014fc088 0xc0014fc0a0] [0xc0014fc070 0xc0014fc088 0xc0014fc0a0] [0xc0014fc080 0xc0014fc098] [0x935700 0x935700] 0xc001d65e60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:46:11.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:46:12.051: INFO: rc: 1 May 12 10:46:12.051: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b82d0 exit status 1 true [0xc000ca8038 0xc000ca8050 0xc000ca8068] [0xc000ca8038 0xc000ca8050 0xc000ca8068] [0xc000ca8048 0xc000ca8060] [0x935700 0x935700] 0xc0018cb680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:46:22.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:46:22.135: INFO: rc: 1 May 12 10:46:22.135: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001880690 exit status 1 true [0xc0015c6070 0xc0015c6088 0xc0015c60a0] [0xc0015c6070 0xc0015c6088 0xc0015c60a0] [0xc0015c6080 0xc0015c6098] [0x935700 0x935700] 0xc001edafc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:46:32.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:46:32.363: INFO: rc: 1 May 12 10:46:32.363: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b8420 exit status 1 true [0xc000ca8070 0xc000ca8088 0xc000ca80a0] [0xc000ca8070 0xc000ca8088 0xc000ca80a0] [0xc000ca8080 0xc000ca8098] [0x935700 0x935700] 0xc001ce21e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:46:42.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:46:42.943: INFO: rc: 1 May 12 10:46:42.943: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023c22d0 exit status 1 true [0xc000e54040 0xc000e54058 0xc000e54070] [0xc000e54040 0xc000e54058 0xc000e54070] [0xc000e54050 0xc000e54068] [0x935700 0x935700] 0xc001878600 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 May 12 10:46:52.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:46:53.029: INFO: rc: 1 May 12 10:46:53.029: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016a2240 exit status 1 true [0xc001324018 0xc001324068 0xc001324138] [0xc001324018 0xc001324068 0xc001324138] [0xc001324060 0xc0013240c0] [0x935700 0x935700] 0xc0018182a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:47:03.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:47:03.114: INFO: rc: 1 May 12 10:47:03.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001880330 exit status 1 true [0xc001324158 0xc001324240 0xc001324290] [0xc001324158 0xc001324240 0xc001324290] [0xc0013241f0 0xc001324280] [0x935700 0x935700] 0xc0018ca720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:47:13.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:47:13.202: INFO: rc: 1 May 12 10:47:13.202: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016a2390 exit status 1 true [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6010 0xc0015c6028] [0x935700 0x935700] 0xc001818660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:47:23.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:47:23.305: INFO: rc: 1 May 12 10:47:23.305: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b81b0 exit status 1 true [0xc000ca8000 0xc000ca8018 0xc000ca8030] [0xc000ca8000 0xc000ca8018 0xc000ca8030] [0xc000ca8010 0xc000ca8028] [0x935700 0x935700] 0xc001edaa80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:47:33.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:47:34.023: INFO: rc: 1 May 12 10:47:34.023: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023c2150 exit 
status 1 true [0xc000e54008 0xc000e54020 0xc000e54038] [0xc000e54008 0xc000e54020 0xc000e54038] [0xc000e54018 0xc000e54030] [0x935700 0x935700] 0xc001ce2240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:47:44.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:47:44.547: INFO: rc: 1 May 12 10:47:44.547: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016a24e0 exit status 1 true [0xc0015c6038 0xc0015c6050 0xc0015c6068] [0xc0015c6038 0xc0015c6050 0xc0015c6068] [0xc0015c6048 0xc0015c6060] [0x935700 0x935700] 0xc0018189c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:47:54.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:47:54.657: INFO: rc: 1 May 12 10:47:54.657: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016a2630 exit status 1 true [0xc0015c6070 0xc0015c6088 0xc0015c60a0] [0xc0015c6070 0xc0015c6088 0xc0015c60a0] [0xc0015c6080 0xc0015c6098] [0x935700 0x935700] 0xc001818c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:48:04.657: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:48:04.831: INFO: rc: 1 May 12 10:48:04.831: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023c2390 exit status 1 true [0xc000e54040 0xc000e54058 0xc000e54070] [0xc000e54040 0xc000e54058 0xc000e54070] [0xc000e54050 0xc000e54068] [0x935700 0x935700] 0xc001ce3260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:48:14.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:48:14.928: INFO: rc: 1 May 12 10:48:14.928: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016a27b0 exit status 1 true [0xc0015c60a8 0xc0015c60c0 0xc0015c60e0] [0xc0015c60a8 0xc0015c60c0 0xc0015c60e0] [0xc0015c60b8 0xc0015c60d8] [0x935700 0x935700] 0xc001819020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:48:24.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:48:25.024: INFO: rc: 1 May 12 10:48:25.024: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001880570 exit status 1 true [0xc0013242a8 0xc0013242d8 0xc001324350] [0xc0013242a8 0xc0013242d8 0xc001324350] [0xc0013242c8 0xc0013242f8] [0x935700 0x935700] 0xc0018cb680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:48:35.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:48:35.397: INFO: rc: 1 May 12 10:48:35.397: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023c2660 exit status 1 true [0xc000e54078 0xc000e54090 0xc000e540a8] [0xc000e54078 0xc000e54090 0xc000e540a8] [0xc000e54088 0xc000e540a0] [0x935700 0x935700] 0xc001ce3500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:48:45.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:48:45.488: INFO: rc: 1 May 12 10:48:45.488: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018806c0 exit status 1 true [0xc001324378 0xc0013243a8 0xc001324400] [0xc001324378 0xc0013243a8 
0xc001324400] [0xc0013243a0 0xc0013243e0] [0x935700 0x935700] 0xc001878300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:48:55.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:48:55.569: INFO: rc: 1 May 12 10:48:55.569: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b8120 exit status 1 true [0xc000ca8008 0xc000ca8020 0xc000ca8038] [0xc000ca8008 0xc000ca8020 0xc000ca8038] [0xc000ca8018 0xc000ca8030] [0x935700 0x935700] 0xc0018ca720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:49:05.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:49:05.661: INFO: rc: 1 May 12 10:49:05.661: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b8330 exit status 1 true [0xc000ca8040 0xc000ca8058 0xc000ca8070] [0xc000ca8040 0xc000ca8058 0xc000ca8070] [0xc000ca8050 0xc000ca8068] [0x935700 0x935700] 0xc0018cb680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:49:15.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:49:15.774: INFO: rc: 1 May 12 10:49:15.774: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b8510 exit status 1 true [0xc000ca8078 0xc000ca8090 0xc000ca80a8] [0xc000ca8078 0xc000ca8090 0xc000ca80a8] [0xc000ca8088 0xc000ca80a0] [0x935700 0x935700] 0xc001edaa20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:49:25.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:49:25.851: INFO: rc: 1 May 12 10:49:25.851: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b8660 exit status 1 true [0xc000ca80b0 0xc000ca80c8 0xc000ca80e0] [0xc000ca80b0 0xc000ca80c8 0xc000ca80e0] [0xc000ca80c0 0xc000ca80d8] [0x935700 0x935700] 0xc001edacc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 10:49:35.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxxtb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:49:36.075: INFO: rc: 1 May 12 10:49:36.075: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: May 12 10:49:36.075: INFO: Scaling statefulset ss to 0 May 12 10:49:36.083: INFO: Waiting for statefulset status.replicas 
updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 12 10:49:36.086: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nxxtb May 12 10:49:36.088: INFO: Scaling statefulset ss to 0 May 12 10:49:36.096: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:49:36.099: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:49:36.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-nxxtb" for this suite. May 12 10:49:46.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:49:46.277: INFO: namespace: e2e-tests-statefulset-nxxtb, resource: bindings, ignored listing per whitelist May 12 10:49:46.318: INFO: namespace e2e-tests-statefulset-nxxtb deletion completed in 10.179333539s • [SLOW TEST:368.843 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client May 12 10:49:46.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 10:49:46.853: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bc7aeef-943e-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-wn2fl" to be "success or failure" May 12 10:49:46.865: INFO: Pod "downwardapi-volume-4bc7aeef-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.813957ms May 12 10:49:49.017: INFO: Pod "downwardapi-volume-4bc7aeef-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164606058s May 12 10:49:51.020: INFO: Pod "downwardapi-volume-4bc7aeef-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167695947s May 12 10:49:53.314: INFO: Pod "downwardapi-volume-4bc7aeef-943e-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.460967891s STEP: Saw pod success May 12 10:49:53.314: INFO: Pod "downwardapi-volume-4bc7aeef-943e-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:49:53.316: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4bc7aeef-943e-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 10:49:53.547: INFO: Waiting for pod downwardapi-volume-4bc7aeef-943e-11ea-92b2-0242ac11001c to disappear May 12 10:49:53.882: INFO: Pod downwardapi-volume-4bc7aeef-943e-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:49:53.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wn2fl" for this suite. May 12 10:50:02.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:50:02.376: INFO: namespace: e2e-tests-downward-api-wn2fl, resource: bindings, ignored listing per whitelist May 12 10:50:02.434: INFO: namespace e2e-tests-downward-api-wn2fl deletion completed in 8.548792775s • [SLOW TEST:16.116 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 
10:50:02.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-55841b92-943e-11ea-92b2-0242ac11001c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-55841b92-943e-11ea-92b2-0242ac11001c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:50:09.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7hz66" for this suite. May 12 10:50:33.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:50:33.515: INFO: namespace: e2e-tests-configmap-7hz66, resource: bindings, ignored listing per whitelist May 12 10:50:33.572: INFO: namespace e2e-tests-configmap-7hz66 deletion completed in 24.091803172s • [SLOW TEST:31.137 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:50:33.572: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-680416d2-943e-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume secrets May 12 10:50:34.437: INFO: Waiting up to 5m0s for pod "pod-secrets-68048924-943e-11ea-92b2-0242ac11001c" in namespace "e2e-tests-secrets-s6wgv" to be "success or failure" May 12 10:50:34.620: INFO: Pod "pod-secrets-68048924-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 183.456273ms May 12 10:50:36.950: INFO: Pod "pod-secrets-68048924-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.513206609s May 12 10:50:38.955: INFO: Pod "pod-secrets-68048924-943e-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.518038396s May 12 10:50:40.959: INFO: Pod "pod-secrets-68048924-943e-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.521999001s STEP: Saw pod success May 12 10:50:40.959: INFO: Pod "pod-secrets-68048924-943e-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:50:40.961: INFO: Trying to get logs from node hunter-worker pod pod-secrets-68048924-943e-11ea-92b2-0242ac11001c container secret-env-test: STEP: delete the pod May 12 10:50:40.982: INFO: Waiting for pod pod-secrets-68048924-943e-11ea-92b2-0242ac11001c to disappear May 12 10:50:40.986: INFO: Pod pod-secrets-68048924-943e-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:50:40.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-s6wgv" for this suite. May 12 10:50:49.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:50:49.068: INFO: namespace: e2e-tests-secrets-s6wgv, resource: bindings, ignored listing per whitelist May 12 10:50:49.086: INFO: namespace e2e-tests-secrets-s6wgv deletion completed in 8.096559094s • [SLOW TEST:15.514 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:50:49.086: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 10:50:49.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70edb1e4-943e-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-94tmr" to be "success or failure" May 12 10:50:49.183: INFO: Pod "downwardapi-volume-70edb1e4-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.951587ms May 12 10:50:51.309: INFO: Pod "downwardapi-volume-70edb1e4-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131026861s May 12 10:50:53.313: INFO: Pod "downwardapi-volume-70edb1e4-943e-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.134767477s May 12 10:50:55.318: INFO: Pod "downwardapi-volume-70edb1e4-943e-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.140307034s STEP: Saw pod success May 12 10:50:55.319: INFO: Pod "downwardapi-volume-70edb1e4-943e-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:50:55.322: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-70edb1e4-943e-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 10:50:55.345: INFO: Waiting for pod downwardapi-volume-70edb1e4-943e-11ea-92b2-0242ac11001c to disappear May 12 10:50:55.368: INFO: Pod downwardapi-volume-70edb1e4-943e-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:50:55.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-94tmr" for this suite. May 12 10:51:01.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:51:01.461: INFO: namespace: e2e-tests-projected-94tmr, resource: bindings, ignored listing per whitelist May 12 10:51:01.466: INFO: namespace e2e-tests-projected-94tmr deletion completed in 6.095791965s • [SLOW TEST:12.381 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 
10:51:01.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 10:51:01.552: INFO: Waiting up to 5m0s for pod "pod-784db7f1-943e-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-7v84q" to be "success or failure" May 12 10:51:01.556: INFO: Pod "pod-784db7f1-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151861ms May 12 10:51:04.040: INFO: Pod "pod-784db7f1-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.487693949s May 12 10:51:06.182: INFO: Pod "pod-784db7f1-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.630671844s May 12 10:51:08.267: INFO: Pod "pod-784db7f1-943e-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.715629094s STEP: Saw pod success May 12 10:51:08.268: INFO: Pod "pod-784db7f1-943e-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:51:08.270: INFO: Trying to get logs from node hunter-worker2 pod pod-784db7f1-943e-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 10:51:08.325: INFO: Waiting for pod pod-784db7f1-943e-11ea-92b2-0242ac11001c to disappear May 12 10:51:08.334: INFO: Pod pod-784db7f1-943e-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:51:08.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7v84q" for this suite. 
May 12 10:51:14.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:51:14.435: INFO: namespace: e2e-tests-emptydir-7v84q, resource: bindings, ignored listing per whitelist May 12 10:51:14.508: INFO: namespace e2e-tests-emptydir-7v84q deletion completed in 6.170532837s • [SLOW TEST:13.041 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:51:14.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 10:51:14.909: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 12 10:51:15.514: INFO: Pod name sample-pod: Found 0 pods out of 1 May 12 10:51:20.656: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 10:51:22.795: INFO: Creating deployment "test-rolling-update-deployment" May 12 10:51:22.799: 
INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 12 10:51:22.863: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 12 10:51:25.036: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 12 10:51:25.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877483, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877483, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877483, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:51:27.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877483, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877483, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877483, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:51:29.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877483, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877483, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877483, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877482, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:51:31.042: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 12 10:51:31.050: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-gmh2s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gmh2s/deployments/test-rolling-update-deployment,UID:84f842f3-943e-11ea-99e8-0242ac110002,ResourceVersion:10150102,Generation:1,CreationTimestamp:2020-05-12 10:51:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-12 10:51:23 +0000 UTC 2020-05-12 10:51:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 10:51:29 +0000 UTC 2020-05-12 10:51:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 12 10:51:31.053: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-gmh2s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gmh2s/replicasets/test-rolling-update-deployment-75db98fb4c,UID:85035ad6-943e-11ea-99e8-0242ac110002,ResourceVersion:10150091,Generation:1,CreationTimestamp:2020-05-12 10:51:22 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 84f842f3-943e-11ea-99e8-0242ac110002 0xc0009525e7 0xc0009525e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 10:51:31.053: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 12 10:51:31.053: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-gmh2s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gmh2s/replicasets/test-rolling-update-controller,UID:8044f3b3-943e-11ea-99e8-0242ac110002,ResourceVersion:10150100,Generation:2,CreationTimestamp:2020-05-12 10:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 84f842f3-943e-11ea-99e8-0242ac110002 0xc00095250f 0xc000952520}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 10:51:31.056: INFO: Pod "test-rolling-update-deployment-75db98fb4c-9xfmn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-9xfmn,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-gmh2s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmh2s/pods/test-rolling-update-deployment-75db98fb4c-9xfmn,UID:850c5055-943e-11ea-99e8-0242ac110002,ResourceVersion:10150090,Generation:0,CreationTimestamp:2020-05-12 10:51:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 85035ad6-943e-11ea-99e8-0242ac110002 0xc002192527 0xc002192528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-82tvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-82tvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-82tvh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021928f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002192910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:51:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:51:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:51:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:51:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.234,StartTime:2020-05-12 10:51:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-12 10:51:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f04c03d494eb89dd6aa1704831e9cc5d0cadbfec41fae01bf1120479d0e1dad8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:51:31.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gmh2s" 
for this suite. May 12 10:51:47.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:51:47.656: INFO: namespace: e2e-tests-deployment-gmh2s, resource: bindings, ignored listing per whitelist May 12 10:51:47.709: INFO: namespace e2e-tests-deployment-gmh2s deletion completed in 16.651127998s • [SLOW TEST:33.201 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:51:47.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 12 10:51:55.583: INFO: Successfully updated pod "annotationupdate9442f064-943e-11ea-92b2-0242ac11001c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:51:57.903: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7dljc" for this suite. May 12 10:52:22.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:52:22.242: INFO: namespace: e2e-tests-projected-7dljc, resource: bindings, ignored listing per whitelist May 12 10:52:22.370: INFO: namespace e2e-tests-projected-7dljc deletion completed in 24.424109391s • [SLOW TEST:34.661 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:52:22.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 12 10:52:23.321: INFO: Pod name pod-release: Found 0 pods out of 1 May 12 10:52:28.628: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:52:29.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-zwgnh" for this suite. May 12 10:52:38.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:52:38.613: INFO: namespace: e2e-tests-replication-controller-zwgnh, resource: bindings, ignored listing per whitelist May 12 10:52:38.618: INFO: namespace e2e-tests-replication-controller-zwgnh deletion completed in 8.690677237s • [SLOW TEST:16.247 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:52:38.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 12 10:52:43.441: INFO: Successfully updated pod "pod-update-b25553b6-943e-11ea-92b2-0242ac11001c" 
STEP: verifying the updated pod is in kubernetes May 12 10:52:43.480: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:52:43.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hdgt5" for this suite. May 12 10:53:07.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:53:07.609: INFO: namespace: e2e-tests-pods-hdgt5, resource: bindings, ignored listing per whitelist May 12 10:53:07.723: INFO: namespace e2e-tests-pods-hdgt5 deletion completed in 24.215030922s • [SLOW TEST:29.105 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:53:07.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-c3a0928c-943e-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 10:53:08.038: INFO: Waiting up to 
5m0s for pod "pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c" in namespace "e2e-tests-configmap-qsstm" to be "success or failure" May 12 10:53:08.066: INFO: Pod "pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.043337ms May 12 10:53:10.167: INFO: Pod "pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128967469s May 12 10:53:12.170: INFO: Pod "pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13246297s May 12 10:53:14.175: INFO: Pod "pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.136857384s May 12 10:53:16.179: INFO: Pod "pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.140539493s STEP: Saw pod success May 12 10:53:16.179: INFO: Pod "pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:53:16.181: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 10:53:16.375: INFO: Waiting for pod pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c to disappear May 12 10:53:16.379: INFO: Pod pod-configmaps-c3a1264d-943e-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:53:16.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qsstm" for this suite. 
May 12 10:53:24.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:53:24.439: INFO: namespace: e2e-tests-configmap-qsstm, resource: bindings, ignored listing per whitelist May 12 10:53:24.469: INFO: namespace e2e-tests-configmap-qsstm deletion completed in 8.088026659s • [SLOW TEST:16.746 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:53:24.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 10:53:24.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine 
--generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-gpq44' May 12 10:53:31.789: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 10:53:31.789: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 12 10:53:35.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-gpq44' May 12 10:53:36.123: INFO: stderr: "" May 12 10:53:36.123: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:53:36.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gpq44" for this suite. 
May 12 10:55:40.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:55:40.563: INFO: namespace: e2e-tests-kubectl-gpq44, resource: bindings, ignored listing per whitelist May 12 10:55:40.611: INFO: namespace e2e-tests-kubectl-gpq44 deletion completed in 2m4.483888667s • [SLOW TEST:136.141 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:55:40.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 12 10:55:47.089: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be 
removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:56:12.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-wjncs" for this suite. May 12 10:56:20.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:56:21.032: INFO: namespace: e2e-tests-namespaces-wjncs, resource: bindings, ignored listing per whitelist May 12 10:56:21.053: INFO: namespace e2e-tests-namespaces-wjncs deletion completed in 8.804656137s STEP: Destroying namespace "e2e-tests-nsdeletetest-dpvv7" for this suite. May 12 10:56:21.055: INFO: Namespace e2e-tests-nsdeletetest-dpvv7 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-4w5d6" for this suite. May 12 10:56:29.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:56:29.304: INFO: namespace: e2e-tests-nsdeletetest-4w5d6, resource: bindings, ignored listing per whitelist May 12 10:56:29.538: INFO: namespace e2e-tests-nsdeletetest-4w5d6 deletion completed in 8.483672574s • [SLOW TEST:48.928 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:56:29.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 10:56:30.128: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c26a254-943f-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-95r49" to be "success or failure" May 12 10:56:30.194: INFO: Pod "downwardapi-volume-3c26a254-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 66.176303ms May 12 10:56:32.197: INFO: Pod "downwardapi-volume-3c26a254-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069242527s May 12 10:56:34.374: INFO: Pod "downwardapi-volume-3c26a254-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246141719s May 12 10:56:36.386: INFO: Pod "downwardapi-volume-3c26a254-943f-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.258455824s STEP: Saw pod success May 12 10:56:36.386: INFO: Pod "downwardapi-volume-3c26a254-943f-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:56:36.389: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3c26a254-943f-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 10:56:36.445: INFO: Waiting for pod downwardapi-volume-3c26a254-943f-11ea-92b2-0242ac11001c to disappear May 12 10:56:36.745: INFO: Pod downwardapi-volume-3c26a254-943f-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:56:36.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-95r49" for this suite. May 12 10:56:42.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:56:42.805: INFO: namespace: e2e-tests-downward-api-95r49, resource: bindings, ignored listing per whitelist May 12 10:56:42.832: INFO: namespace e2e-tests-downward-api-95r49 deletion completed in 6.082741077s • [SLOW TEST:13.294 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client May 12 10:56:42.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c May 12 10:56:44.285: INFO: Pod name my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c: Found 0 pods out of 1 May 12 10:56:49.288: INFO: Pod name my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c: Found 1 pods out of 1 May 12 10:56:49.288: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c" are running May 12 10:56:51.326: INFO: Pod "my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c-tmns7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:56:44 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:56:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:56:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:56:44 +0000 UTC Reason: Message:}]) May 12 10:56:51.326: INFO: Trying to dial the pod May 12 10:56:56.338: INFO: Controller my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c: Got expected result from replica 1 [my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c-tmns7]: 
"my-hostname-basic-443b22c6-943f-11ea-92b2-0242ac11001c-tmns7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:56:56.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-8zj6k" for this suite. May 12 10:57:04.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:57:04.879: INFO: namespace: e2e-tests-replication-controller-8zj6k, resource: bindings, ignored listing per whitelist May 12 10:57:04.928: INFO: namespace e2e-tests-replication-controller-8zj6k deletion completed in 8.586474402s • [SLOW TEST:22.095 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:57:04.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a 
RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 12 10:57:05.662: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:57:14.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-k8f98" for this suite. May 12 10:57:25.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:57:25.091: INFO: namespace: e2e-tests-init-container-k8f98, resource: bindings, ignored listing per whitelist May 12 10:57:25.181: INFO: namespace e2e-tests-init-container-k8f98 deletion completed in 10.148914567s • [SLOW TEST:20.253 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:57:25.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 12 10:57:33.319: INFO: Successfully updated pod "labelsupdate5da24775-943f-11ea-92b2-0242ac11001c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:57:35.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5pn8s" for this suite. May 12 10:57:57.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:57:57.874: INFO: namespace: e2e-tests-projected-5pn8s, resource: bindings, ignored listing per whitelist May 12 10:57:57.950: INFO: namespace e2e-tests-projected-5pn8s deletion completed in 22.280602861s • [SLOW TEST:32.768 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:57:57.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service 
account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 12 10:57:58.067: INFO: Waiting up to 5m0s for pod "pod-7090055a-943f-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-mvtnc" to be "success or failure" May 12 10:57:58.071: INFO: Pod "pod-7090055a-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.529003ms May 12 10:58:00.075: INFO: Pod "pod-7090055a-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00762207s May 12 10:58:02.078: INFO: Pod "pod-7090055a-943f-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.010889776s May 12 10:58:04.112: INFO: Pod "pod-7090055a-943f-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045047743s STEP: Saw pod success May 12 10:58:04.112: INFO: Pod "pod-7090055a-943f-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:58:04.115: INFO: Trying to get logs from node hunter-worker2 pod pod-7090055a-943f-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 10:58:04.161: INFO: Waiting for pod pod-7090055a-943f-11ea-92b2-0242ac11001c to disappear May 12 10:58:04.184: INFO: Pod pod-7090055a-943f-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:58:04.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mvtnc" for this suite. 
May 12 10:58:10.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:58:10.298: INFO: namespace: e2e-tests-emptydir-mvtnc, resource: bindings, ignored listing per whitelist May 12 10:58:10.338: INFO: namespace e2e-tests-emptydir-mvtnc deletion completed in 6.149478836s • [SLOW TEST:12.388 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:58:10.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-77f57b46-943f-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume secrets May 12 10:58:10.521: INFO: Waiting up to 5m0s for pod "pod-secrets-77fcfe1e-943f-11ea-92b2-0242ac11001c" in namespace "e2e-tests-secrets-4bf8v" to be "success or failure" May 12 10:58:10.646: INFO: Pod "pod-secrets-77fcfe1e-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 124.334326ms May 12 10:58:12.649: INFO: Pod "pod-secrets-77fcfe1e-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127613606s May 12 10:58:14.668: INFO: Pod "pod-secrets-77fcfe1e-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147107883s May 12 10:58:16.771: INFO: Pod "pod-secrets-77fcfe1e-943f-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249456434s STEP: Saw pod success May 12 10:58:16.771: INFO: Pod "pod-secrets-77fcfe1e-943f-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 10:58:16.774: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-77fcfe1e-943f-11ea-92b2-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 10:58:16.804: INFO: Waiting for pod pod-secrets-77fcfe1e-943f-11ea-92b2-0242ac11001c to disappear May 12 10:58:16.820: INFO: Pod pod-secrets-77fcfe1e-943f-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 10:58:16.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4bf8v" for this suite. 
May 12 10:58:27.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:58:27.112: INFO: namespace: e2e-tests-secrets-4bf8v, resource: bindings, ignored listing per whitelist May 12 10:58:27.564: INFO: namespace e2e-tests-secrets-4bf8v deletion completed in 10.740727326s • [SLOW TEST:17.225 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 10:58:27.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 12 10:58:34.119: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-825f890d-943f-11ea-92b2-0242ac11001c", GenerateName:"", Namespace:"e2e-tests-pods-r72q6", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-r72q6/pods/pod-submit-remove-825f890d-943f-11ea-92b2-0242ac11001c", UID:"826f286a-943f-11ea-99e8-0242ac110002", ResourceVersion:"10151308", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724877908, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"937274749"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wpxmz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024ea8c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wpxmz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002763268), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002655260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027632b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027632d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0027632d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(0xc0027632dc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877908, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877913, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877913, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877908, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.62", StartTime:(*v1.Time)(0xc00253ce20), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00253ce40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://a22c714a57f2d8ef9243636b645d1bf7ad480a8219b093f2d5d8f6332906deb1"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 12 10:58:39.164: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:58:39.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-r72q6" for this suite.
May 12 10:58:47.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:58:47.395: INFO: namespace: e2e-tests-pods-r72q6, resource: bindings, ignored listing per whitelist
May 12 10:58:47.440: INFO: namespace e2e-tests-pods-r72q6 deletion completed in 8.268435579s
• [SLOW TEST:19.876 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:58:47.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set
[NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-8e61f091-943f-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume secrets
May 12 10:58:48.144: INFO: Waiting up to 5m0s for pod "pod-secrets-8e63c7da-943f-11ea-92b2-0242ac11001c" in namespace "e2e-tests-secrets-w66m2" to be "success or failure"
May 12 10:58:48.340: INFO: Pod "pod-secrets-8e63c7da-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 196.742377ms
May 12 10:58:50.400: INFO: Pod "pod-secrets-8e63c7da-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256629802s
May 12 10:58:52.495: INFO: Pod "pod-secrets-8e63c7da-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351858821s
May 12 10:58:54.499: INFO: Pod "pod-secrets-8e63c7da-943f-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.354876298s
STEP: Saw pod success
May 12 10:58:54.499: INFO: Pod "pod-secrets-8e63c7da-943f-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:58:54.501: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-8e63c7da-943f-11ea-92b2-0242ac11001c container secret-volume-test:
STEP: delete the pod
May 12 10:58:54.522: INFO: Waiting for pod pod-secrets-8e63c7da-943f-11ea-92b2-0242ac11001c to disappear
May 12 10:58:54.885: INFO: Pod pod-secrets-8e63c7da-943f-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:58:54.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-w66m2" for this suite.
May 12 10:59:02.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:59:03.000: INFO: namespace: e2e-tests-secrets-w66m2, resource: bindings, ignored listing per whitelist
May 12 10:59:03.042: INFO: namespace e2e-tests-secrets-w66m2 deletion completed in 8.142703536s
• [SLOW TEST:15.602 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:59:03.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 10:59:03.236: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:59:05.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-b9xlp" for this suite.
May 12 10:59:11.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:59:11.832: INFO: namespace: e2e-tests-custom-resource-definition-b9xlp, resource: bindings, ignored listing per whitelist
May 12 10:59:11.916: INFO: namespace e2e-tests-custom-resource-definition-b9xlp deletion completed in 6.464746517s
• [SLOW TEST:8.874 seconds]
[sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:59:11.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-9cb51e6e-943f-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume secrets
May 12 10:59:12.237: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c"
in namespace "e2e-tests-projected-ffv8c" to be "success or failure"
May 12 10:59:12.275: INFO: Pod "pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.80665ms
May 12 10:59:14.278: INFO: Pod "pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040760197s
May 12 10:59:16.605: INFO: Pod "pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367396864s
May 12 10:59:18.846: INFO: Pod "pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.608408328s
May 12 10:59:20.850: INFO: Pod "pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.612405494s
STEP: Saw pod success
May 12 10:59:20.850: INFO: Pod "pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 10:59:20.853: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c container projected-secret-volume-test:
STEP: delete the pod
May 12 10:59:21.078: INFO: Waiting for pod pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c to disappear
May 12 10:59:21.084: INFO: Pod pod-projected-secrets-9cc69031-943f-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:59:21.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ffv8c" for this suite.
May 12 10:59:31.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:59:31.299: INFO: namespace: e2e-tests-projected-ffv8c, resource: bindings, ignored listing per whitelist
May 12 10:59:31.320: INFO: namespace e2e-tests-projected-ffv8c deletion completed in 10.223115662s
• [SLOW TEST:19.403 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:59:31.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 12 10:59:32.832: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 10:59:46.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace
"e2e-tests-init-container-g8kmx" for this suite.
May 12 10:59:54.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:59:54.788: INFO: namespace: e2e-tests-init-container-g8kmx, resource: bindings, ignored listing per whitelist
May 12 10:59:54.832: INFO: namespace e2e-tests-init-container-g8kmx deletion completed in 8.08348308s
• [SLOW TEST:23.511 seconds]
[k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 10:59:54.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 12 10:59:55.058: INFO: Waiting up to 5m0s for pod "downward-api-b646a5e1-943f-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-fwkdr" to be "success or failure"
May 12 10:59:55.167: INFO: Pod "downward-api-b646a5e1-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false.
Elapsed: 109.202134ms
May 12 10:59:57.171: INFO: Pod "downward-api-b646a5e1-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113041198s
May 12 10:59:59.563: INFO: Pod "downward-api-b646a5e1-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505115889s
May 12 11:00:01.567: INFO: Pod "downward-api-b646a5e1-943f-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.508423319s
STEP: Saw pod success
May 12 11:00:01.567: INFO: Pod "downward-api-b646a5e1-943f-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:00:01.569: INFO: Trying to get logs from node hunter-worker2 pod downward-api-b646a5e1-943f-11ea-92b2-0242ac11001c container dapi-container:
STEP: delete the pod
May 12 11:00:01.659: INFO: Waiting for pod downward-api-b646a5e1-943f-11ea-92b2-0242ac11001c to disappear
May 12 11:00:01.666: INFO: Pod downward-api-b646a5e1-943f-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:00:01.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fwkdr" for this suite.
May 12 11:00:07.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:00:07.710: INFO: namespace: e2e-tests-downward-api-fwkdr, resource: bindings, ignored listing per whitelist
May 12 11:00:07.751: INFO: namespace e2e-tests-downward-api-fwkdr deletion completed in 6.082626509s
• [SLOW TEST:12.919 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:00:07.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-bdf96f57-943f-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume configMaps
May 12 11:00:08.029: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be078d74-943f-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-xkzm7" to be "success or failure"
May 12 11:00:08.058: INFO: Pod "pod-projected-configmaps-be078d74-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="",
readiness=false. Elapsed: 28.448883ms
May 12 11:00:10.192: INFO: Pod "pod-projected-configmaps-be078d74-943f-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163156881s
May 12 11:00:12.196: INFO: Pod "pod-projected-configmaps-be078d74-943f-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.166978841s
May 12 11:00:14.199: INFO: Pod "pod-projected-configmaps-be078d74-943f-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169406955s
STEP: Saw pod success
May 12 11:00:14.199: INFO: Pod "pod-projected-configmaps-be078d74-943f-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:00:14.200: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-be078d74-943f-11ea-92b2-0242ac11001c container projected-configmap-volume-test:
STEP: delete the pod
May 12 11:00:14.299: INFO: Waiting for pod pod-projected-configmaps-be078d74-943f-11ea-92b2-0242ac11001c to disappear
May 12 11:00:14.301: INFO: Pod pod-projected-configmaps-be078d74-943f-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:00:14.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xkzm7" for this suite.
May 12 11:00:22.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:00:22.403: INFO: namespace: e2e-tests-projected-xkzm7, resource: bindings, ignored listing per whitelist
May 12 11:00:22.421: INFO: namespace e2e-tests-projected-xkzm7 deletion completed in 8.116846989s
• [SLOW TEST:14.670 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:00:22.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-l9gq2
May 12 11:00:28.657: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-l9gq2
STEP: checking the pod's current state and verifying that restartCount is present
May 12 11:00:28.660: INFO: Initial
restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:04:28.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-l9gq2" for this suite.
May 12 11:04:39.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:04:39.266: INFO: namespace: e2e-tests-container-probe-l9gq2, resource: bindings, ignored listing per whitelist
May 12 11:04:39.339: INFO: namespace e2e-tests-container-probe-l9gq2 deletion completed in 10.324769712s
• [SLOW TEST:256.918 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:04:39.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace
e2e-tests-pod-network-test-qqxsh
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 12 11:04:39.694: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 12 11:05:12.290: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.67:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-qqxsh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 11:05:12.290: INFO: >>> kubeConfig: /root/.kube/config
I0512 11:05:12.311773 6 log.go:172] (0xc0000eaf20) (0xc0023197c0) Create stream
I0512 11:05:12.311793 6 log.go:172] (0xc0000eaf20) (0xc0023197c0) Stream added, broadcasting: 1
I0512 11:05:12.313440 6 log.go:172] (0xc0000eaf20) Reply frame received for 1
I0512 11:05:12.313475 6 log.go:172] (0xc0000eaf20) (0xc002330000) Create stream
I0512 11:05:12.313485 6 log.go:172] (0xc0000eaf20) (0xc002330000) Stream added, broadcasting: 3
I0512 11:05:12.314136 6 log.go:172] (0xc0000eaf20) Reply frame received for 3
I0512 11:05:12.314159 6 log.go:172] (0xc0000eaf20) (0xc001c53ae0) Create stream
I0512 11:05:12.314167 6 log.go:172] (0xc0000eaf20) (0xc001c53ae0) Stream added, broadcasting: 5
I0512 11:05:12.314756 6 log.go:172] (0xc0000eaf20) Reply frame received for 5
I0512 11:05:12.386412 6 log.go:172] (0xc0000eaf20) Data frame received for 3
I0512 11:05:12.386434 6 log.go:172] (0xc002330000) (3) Data frame handling
I0512 11:05:12.386451 6 log.go:172] (0xc002330000) (3) Data frame sent
I0512 11:05:12.386471 6 log.go:172] (0xc0000eaf20) Data frame received for 3
I0512 11:05:12.386482 6 log.go:172] (0xc002330000) (3) Data frame handling
I0512 11:05:12.386497 6 log.go:172] (0xc0000eaf20) Data frame received for 5
I0512 11:05:12.386505 6 log.go:172] (0xc001c53ae0) (5) Data frame handling
I0512 11:05:12.388078 6 log.go:172] (0xc0000eaf20) Data frame received for 1
I0512 11:05:12.388098
6 log.go:172] (0xc0023197c0) (1) Data frame handling
I0512 11:05:12.388158 6 log.go:172] (0xc0023197c0) (1) Data frame sent
I0512 11:05:12.388171 6 log.go:172] (0xc0000eaf20) (0xc0023197c0) Stream removed, broadcasting: 1
I0512 11:05:12.388183 6 log.go:172] (0xc0000eaf20) Go away received
I0512 11:05:12.388306 6 log.go:172] (0xc0000eaf20) (0xc0023197c0) Stream removed, broadcasting: 1
I0512 11:05:12.388321 6 log.go:172] (0xc0000eaf20) (0xc002330000) Stream removed, broadcasting: 3
I0512 11:05:12.388331 6 log.go:172] (0xc0000eaf20) (0xc001c53ae0) Stream removed, broadcasting: 5
May 12 11:05:12.388: INFO: Found all expected endpoints: [netserver-0]
May 12 11:05:12.435: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.246:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-qqxsh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 11:05:12.435: INFO: >>> kubeConfig: /root/.kube/config
I0512 11:05:12.458983 6 log.go:172] (0xc0009a69a0) (0xc001c47540) Create stream
I0512 11:05:12.459005 6 log.go:172] (0xc0009a69a0) (0xc001c47540) Stream added, broadcasting: 1
I0512 11:05:12.460553 6 log.go:172] (0xc0009a69a0) Reply frame received for 1
I0512 11:05:12.460593 6 log.go:172] (0xc0009a69a0) (0xc001c53c20) Create stream
I0512 11:05:12.460606 6 log.go:172] (0xc0009a69a0) (0xc001c53c20) Stream added, broadcasting: 3
I0512 11:05:12.461393 6 log.go:172] (0xc0009a69a0) Reply frame received for 3
I0512 11:05:12.461419 6 log.go:172] (0xc0009a69a0) (0xc0023300a0) Create stream
I0512 11:05:12.461430 6 log.go:172] (0xc0009a69a0) (0xc0023300a0) Stream added, broadcasting: 5
I0512 11:05:12.462078 6 log.go:172] (0xc0009a69a0) Reply frame received for 5
I0512 11:05:12.540845 6 log.go:172] (0xc0009a69a0) Data frame received for 3
I0512 11:05:12.540909 6 log.go:172] (0xc001c53c20) (3) Data frame handling
I0512 11:05:12.540960 6
log.go:172] (0xc0009a69a0) Data frame received for 5
I0512 11:05:12.541001 6 log.go:172] (0xc0023300a0) (5) Data frame handling
I0512 11:05:12.541043 6 log.go:172] (0xc001c53c20) (3) Data frame sent
I0512 11:05:12.541066 6 log.go:172] (0xc0009a69a0) Data frame received for 3
I0512 11:05:12.541079 6 log.go:172] (0xc001c53c20) (3) Data frame handling
I0512 11:05:12.542346 6 log.go:172] (0xc0009a69a0) Data frame received for 1
I0512 11:05:12.542368 6 log.go:172] (0xc001c47540) (1) Data frame handling
I0512 11:05:12.542398 6 log.go:172] (0xc001c47540) (1) Data frame sent
I0512 11:05:12.542602 6 log.go:172] (0xc0009a69a0) (0xc001c47540) Stream removed, broadcasting: 1
I0512 11:05:12.542631 6 log.go:172] (0xc0009a69a0) Go away received
I0512 11:05:12.542722 6 log.go:172] (0xc0009a69a0) (0xc001c47540) Stream removed, broadcasting: 1
I0512 11:05:12.542742 6 log.go:172] (0xc0009a69a0) (0xc001c53c20) Stream removed, broadcasting: 3
I0512 11:05:12.542755 6 log.go:172] (0xc0009a69a0) (0xc0023300a0) Stream removed, broadcasting: 5
May 12 11:05:12.542: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:05:12.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-qqxsh" for this suite.
May 12 11:05:42.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:05:42.608: INFO: namespace: e2e-tests-pod-network-test-qqxsh, resource: bindings, ignored listing per whitelist May 12 11:05:42.633: INFO: namespace e2e-tests-pod-network-test-qqxsh deletion completed in 30.086100501s • [SLOW TEST:63.293 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:05:42.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:05:43.003: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-lllt8" for this suite. May 12 11:06:05.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:06:05.217: INFO: namespace: e2e-tests-pods-lllt8, resource: bindings, ignored listing per whitelist May 12 11:06:05.243: INFO: namespace e2e-tests-pods-lllt8 deletion completed in 22.201099507s • [SLOW TEST:22.610 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:06:05.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 12 11:06:06.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=e2e-tests-kubectl-ftdh7 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 12 11:06:18.580: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0512 11:06:18.513521 2521 log.go:172] (0xc0007d4420) (0xc0007466e0) Create stream\nI0512 11:06:18.513585 2521 log.go:172] (0xc0007d4420) (0xc0007466e0) Stream added, broadcasting: 1\nI0512 11:06:18.515844 2521 log.go:172] (0xc0007d4420) Reply frame received for 1\nI0512 11:06:18.515881 2521 log.go:172] (0xc0007d4420) (0xc0005f4000) Create stream\nI0512 11:06:18.515890 2521 log.go:172] (0xc0007d4420) (0xc0005f4000) Stream added, broadcasting: 3\nI0512 11:06:18.516737 2521 log.go:172] (0xc0007d4420) Reply frame received for 3\nI0512 11:06:18.516778 2521 log.go:172] (0xc0007d4420) (0xc0006c6000) Create stream\nI0512 11:06:18.516793 2521 log.go:172] (0xc0007d4420) (0xc0006c6000) Stream added, broadcasting: 5\nI0512 11:06:18.519783 2521 log.go:172] (0xc0007d4420) Reply frame received for 5\nI0512 11:06:18.519810 2521 log.go:172] (0xc0007d4420) (0xc0005f40a0) Create stream\nI0512 11:06:18.519817 2521 log.go:172] (0xc0007d4420) (0xc0005f40a0) Stream added, broadcasting: 7\nI0512 11:06:18.520545 2521 log.go:172] (0xc0007d4420) Reply frame received for 7\nI0512 11:06:18.520711 2521 log.go:172] (0xc0005f4000) (3) Writing data frame\nI0512 11:06:18.520799 2521 log.go:172] (0xc0005f4000) (3) Writing data frame\nI0512 11:06:18.521589 2521 log.go:172] (0xc0007d4420) Data frame received for 5\nI0512 11:06:18.521605 2521 log.go:172] (0xc0006c6000) (5) Data frame handling\nI0512 11:06:18.521622 2521 log.go:172] (0xc0006c6000) (5) Data frame sent\nI0512 11:06:18.522116 2521 log.go:172] (0xc0007d4420) Data frame received for 5\nI0512 
11:06:18.522127 2521 log.go:172] (0xc0006c6000) (5) Data frame handling\nI0512 11:06:18.522136 2521 log.go:172] (0xc0006c6000) (5) Data frame sent\nI0512 11:06:18.558386 2521 log.go:172] (0xc0007d4420) Data frame received for 7\nI0512 11:06:18.558415 2521 log.go:172] (0xc0005f40a0) (7) Data frame handling\nI0512 11:06:18.558440 2521 log.go:172] (0xc0007d4420) Data frame received for 5\nI0512 11:06:18.558448 2521 log.go:172] (0xc0006c6000) (5) Data frame handling\nI0512 11:06:18.558504 2521 log.go:172] (0xc0007d4420) Data frame received for 1\nI0512 11:06:18.558521 2521 log.go:172] (0xc0007466e0) (1) Data frame handling\nI0512 11:06:18.558532 2521 log.go:172] (0xc0007466e0) (1) Data frame sent\nI0512 11:06:18.558545 2521 log.go:172] (0xc0007d4420) (0xc0007466e0) Stream removed, broadcasting: 1\nI0512 11:06:18.558576 2521 log.go:172] (0xc0007d4420) (0xc0005f4000) Stream removed, broadcasting: 3\nI0512 11:06:18.558628 2521 log.go:172] (0xc0007d4420) (0xc0007466e0) Stream removed, broadcasting: 1\nI0512 11:06:18.558642 2521 log.go:172] (0xc0007d4420) (0xc0005f4000) Stream removed, broadcasting: 3\nI0512 11:06:18.558652 2521 log.go:172] (0xc0007d4420) (0xc0006c6000) Stream removed, broadcasting: 5\nI0512 11:06:18.558685 2521 log.go:172] (0xc0007d4420) Go away received\nI0512 11:06:18.558780 2521 log.go:172] (0xc0007d4420) (0xc0005f40a0) Stream removed, broadcasting: 7\n" May 12 11:06:18.580: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:06:20.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ftdh7" for this suite. 
May 12 11:06:26.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:06:26.826: INFO: namespace: e2e-tests-kubectl-ftdh7, resource: bindings, ignored listing per whitelist May 12 11:06:26.859: INFO: namespace e2e-tests-kubectl-ftdh7 deletion completed in 6.113193466s • [SLOW TEST:21.615 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:06:26.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-9ff6096e-9440-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume secrets May 12 11:06:27.091: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-t4rwp" to be "success or failure" May 12 11:06:27.106: INFO: Pod 
"pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.998957ms May 12 11:06:29.109: INFO: Pod "pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018264505s May 12 11:06:31.232: INFO: Pod "pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140657935s May 12 11:06:33.236: INFO: Pod "pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14441558s May 12 11:06:35.239: INFO: Pod "pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.147621322s STEP: Saw pod success May 12 11:06:35.239: INFO: Pod "pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:06:35.241: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c container projected-secret-volume-test: STEP: delete the pod May 12 11:06:35.294: INFO: Waiting for pod pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c to disappear May 12 11:06:35.325: INFO: Pod pod-projected-secrets-9ff69354-9440-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:06:35.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t4rwp" for this suite. 
May 12 11:06:41.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:06:41.418: INFO: namespace: e2e-tests-projected-t4rwp, resource: bindings, ignored listing per whitelist May 12 11:06:41.469: INFO: namespace e2e-tests-projected-t4rwp deletion completed in 6.130264141s • [SLOW TEST:14.610 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:06:41.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 11:06:41.573: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8997b58-9440-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-2wwsw" to be "success or failure" May 12 11:06:41.601: INFO: Pod "downwardapi-volume-a8997b58-9440-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.573289ms May 12 11:06:43.683: INFO: Pod "downwardapi-volume-a8997b58-9440-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109694677s May 12 11:06:45.700: INFO: Pod "downwardapi-volume-a8997b58-9440-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.127016997s May 12 11:06:47.809: INFO: Pod "downwardapi-volume-a8997b58-9440-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.236592531s STEP: Saw pod success May 12 11:06:47.810: INFO: Pod "downwardapi-volume-a8997b58-9440-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:06:47.958: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a8997b58-9440-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 11:06:48.139: INFO: Waiting for pod downwardapi-volume-a8997b58-9440-11ea-92b2-0242ac11001c to disappear May 12 11:06:48.159: INFO: Pod downwardapi-volume-a8997b58-9440-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:06:48.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2wwsw" for this suite. 
May 12 11:06:54.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:06:54.216: INFO: namespace: e2e-tests-projected-2wwsw, resource: bindings, ignored listing per whitelist May 12 11:06:54.248: INFO: namespace e2e-tests-projected-2wwsw deletion completed in 6.084951359s • [SLOW TEST:12.779 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:06:54.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-gcjz2.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gcjz2.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gcjz2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-gcjz2.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gcjz2.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gcjz2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 11:07:08.657: INFO: DNS probes using e2e-tests-dns-gcjz2/dns-test-b034f755-9440-11ea-92b2-0242ac11001c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:07:08.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-gcjz2" for this suite. 
May 12 11:07:16.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:07:16.767: INFO: namespace: e2e-tests-dns-gcjz2, resource: bindings, ignored listing per whitelist May 12 11:07:16.783: INFO: namespace e2e-tests-dns-gcjz2 deletion completed in 8.072754764s • [SLOW TEST:22.535 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:07:16.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-6tl5j/secret-test-bde9c2e8-9440-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume secrets May 12 11:07:17.346: INFO: Waiting up to 5m0s for pod "pod-configmaps-bdea10f6-9440-11ea-92b2-0242ac11001c" in namespace "e2e-tests-secrets-6tl5j" to be "success or failure" May 12 11:07:17.363: INFO: Pod "pod-configmaps-bdea10f6-9440-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.488142ms May 12 11:07:19.630: INFO: Pod "pod-configmaps-bdea10f6-9440-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283600824s May 12 11:07:21.677: INFO: Pod "pod-configmaps-bdea10f6-9440-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330807461s May 12 11:07:23.887: INFO: Pod "pod-configmaps-bdea10f6-9440-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.540569486s STEP: Saw pod success May 12 11:07:23.887: INFO: Pod "pod-configmaps-bdea10f6-9440-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:07:23.890: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-bdea10f6-9440-11ea-92b2-0242ac11001c container env-test: STEP: delete the pod May 12 11:07:23.941: INFO: Waiting for pod pod-configmaps-bdea10f6-9440-11ea-92b2-0242ac11001c to disappear May 12 11:07:24.163: INFO: Pod pod-configmaps-bdea10f6-9440-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:07:24.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-6tl5j" for this suite. 
May 12 11:07:30.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:07:30.364: INFO: namespace: e2e-tests-secrets-6tl5j, resource: bindings, ignored listing per whitelist
May 12 11:07:30.368: INFO: namespace e2e-tests-secrets-6tl5j deletion completed in 6.201476241s
• [SLOW TEST:13.585 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:07:30.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 12 11:07:37.790: INFO: 10 pods remaining
May 12 11:07:37.790: INFO: 9 pods has nil DeletionTimestamp
May 12 11:07:37.790: INFO:
May 12 11:07:40.890: INFO: 7 pods remaining
May 12 11:07:40.890: INFO: 0 pods has nil DeletionTimestamp
May 12 11:07:40.890: INFO:
May 12 11:07:42.559: INFO: 0 pods remaining
May 12 11:07:42.559: INFO: 0 pods has nil DeletionTimestamp
May 12 11:07:42.559: INFO:
STEP: Gathering metrics
W0512 11:07:43.948260       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 11:07:43.948: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:07:43.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2wmm2" for this suite.
May 12 11:07:50.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:07:50.069: INFO: namespace: e2e-tests-gc-2wmm2, resource: bindings, ignored listing per whitelist
May 12 11:07:50.095: INFO: namespace e2e-tests-gc-2wmm2 deletion completed in 6.142054353s
• [SLOW TEST:19.726 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:07:50.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-prtfh
May 12 11:07:57.196: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-prtfh
STEP: checking the pod's current state and verifying that restartCount is present
May 12 11:07:57.198: INFO: Initial restart count of pod liveness-exec is 0
May 12 11:08:48.113: INFO: Restart count of pod e2e-tests-container-probe-prtfh/liveness-exec is now 1 (50.914571593s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:08:48.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-prtfh" for this suite.
May 12 11:08:54.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:08:54.862: INFO: namespace: e2e-tests-container-probe-prtfh, resource: bindings, ignored listing per whitelist
May 12 11:08:54.886: INFO: namespace e2e-tests-container-probe-prtfh deletion completed in 6.316171473s
• [SLOW TEST:64.791 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:08:54.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 11:08:55.103: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:09:01.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-x4jgh" for this suite. May 12 11:09:41.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:09:41.432: INFO: namespace: e2e-tests-pods-x4jgh, resource: bindings, ignored listing per whitelist May 12 11:09:41.519: INFO: namespace e2e-tests-pods-x4jgh deletion completed in 40.169812758s • [SLOW TEST:46.633 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:09:41.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-13f02dab-9441-11ea-92b2-0242ac11001c STEP: Creating secret with name secret-projected-all-test-volume-13f02d64-9441-11ea-92b2-0242ac11001c STEP: Creating a pod to test Check all projections for projected volume plugin May 12 11:09:41.794: INFO: Waiting up to 5m0s for pod "projected-volume-13f02cf0-9441-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-68nct" to be "success or failure" May 12 11:09:41.828: INFO: Pod "projected-volume-13f02cf0-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.959656ms May 12 11:09:43.832: INFO: Pod "projected-volume-13f02cf0-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038022677s May 12 11:09:45.853: INFO: Pod "projected-volume-13f02cf0-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059642911s May 12 11:09:47.857: INFO: Pod "projected-volume-13f02cf0-9441-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.06336331s STEP: Saw pod success May 12 11:09:47.857: INFO: Pod "projected-volume-13f02cf0-9441-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:09:47.860: INFO: Trying to get logs from node hunter-worker pod projected-volume-13f02cf0-9441-11ea-92b2-0242ac11001c container projected-all-volume-test: STEP: delete the pod May 12 11:09:47.880: INFO: Waiting for pod projected-volume-13f02cf0-9441-11ea-92b2-0242ac11001c to disappear May 12 11:09:47.884: INFO: Pod projected-volume-13f02cf0-9441-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:09:47.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-68nct" for this suite. May 12 11:09:53.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:09:53.921: INFO: namespace: e2e-tests-projected-68nct, resource: bindings, ignored listing per whitelist May 12 11:09:53.972: INFO: namespace e2e-tests-projected-68nct deletion completed in 6.085103316s • [SLOW TEST:12.452 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client May 12 11:09:53.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 11:09:54.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bbf6053-9441-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-c6b52" to be "success or failure" May 12 11:09:54.901: INFO: Pod "downwardapi-volume-1bbf6053-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 109.788472ms May 12 11:09:56.905: INFO: Pod "downwardapi-volume-1bbf6053-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113850164s May 12 11:09:58.909: INFO: Pod "downwardapi-volume-1bbf6053-9441-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.117451926s STEP: Saw pod success May 12 11:09:58.909: INFO: Pod "downwardapi-volume-1bbf6053-9441-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:09:58.913: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1bbf6053-9441-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 11:09:58.943: INFO: Waiting for pod downwardapi-volume-1bbf6053-9441-11ea-92b2-0242ac11001c to disappear May 12 11:09:58.948: INFO: Pod downwardapi-volume-1bbf6053-9441-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:09:58.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c6b52" for this suite. May 12 11:10:04.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:10:05.089: INFO: namespace: e2e-tests-downward-api-c6b52, resource: bindings, ignored listing per whitelist May 12 11:10:05.092: INFO: namespace e2e-tests-downward-api-c6b52 deletion completed in 6.137634351s • [SLOW TEST:11.120 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:10:05.092: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 12 11:10:05.714: INFO: Waiting up to 5m0s for pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d" in namespace "e2e-tests-svcaccounts-wfqhm" to be "success or failure" May 12 11:10:05.720: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.264523ms May 12 11:10:07.723: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008853508s May 12 11:10:09.727: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012874677s May 12 11:10:12.375: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.660623254s May 12 11:10:14.412: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.698000215s May 12 11:10:16.416: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.701499844s STEP: Saw pod success May 12 11:10:16.416: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d" satisfied condition "success or failure" May 12 11:10:16.418: INFO: Trying to get logs from node hunter-worker pod pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d container token-test: STEP: delete the pod May 12 11:10:16.476: INFO: Waiting for pod pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d to disappear May 12 11:10:16.495: INFO: Pod pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-j299d no longer exists STEP: Creating a pod to test consume service account root CA May 12 11:10:16.536: INFO: Waiting up to 5m0s for pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq" in namespace "e2e-tests-svcaccounts-wfqhm" to be "success or failure" May 12 11:10:16.556: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.514377ms May 12 11:10:18.560: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024361975s May 12 11:10:20.564: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02766508s May 12 11:10:22.593: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057375993s May 12 11:10:24.598: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062059961s May 12 11:10:26.752: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.216242866s May 12 11:10:28.756: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.2197378s STEP: Saw pod success May 12 11:10:28.756: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq" satisfied condition "success or failure" May 12 11:10:28.758: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq container root-ca-test: STEP: delete the pod May 12 11:10:28.874: INFO: Waiting for pod pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq to disappear May 12 11:10:28.924: INFO: Pod pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-bdfqq no longer exists STEP: Creating a pod to test consume service account namespace May 12 11:10:28.928: INFO: Waiting up to 5m0s for pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk" in namespace "e2e-tests-svcaccounts-wfqhm" to be "success or failure" May 12 11:10:29.290: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk": Phase="Pending", Reason="", readiness=false. Elapsed: 362.324052ms May 12 11:10:31.294: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366589171s May 12 11:10:33.298: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370319743s May 12 11:10:35.302: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374335718s May 12 11:10:37.305: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.377435263s May 12 11:10:39.309: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.380845211s STEP: Saw pod success May 12 11:10:39.309: INFO: Pod "pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk" satisfied condition "success or failure" May 12 11:10:39.311: INFO: Trying to get logs from node hunter-worker pod pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk container namespace-test: STEP: delete the pod May 12 11:10:39.526: INFO: Waiting for pod pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk to disappear May 12 11:10:39.601: INFO: Pod pod-service-account-22472e1a-9441-11ea-92b2-0242ac11001c-nz6pk no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:10:39.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-wfqhm" for this suite. May 12 11:10:45.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:10:45.715: INFO: namespace: e2e-tests-svcaccounts-wfqhm, resource: bindings, ignored listing per whitelist May 12 11:10:45.834: INFO: namespace e2e-tests-svcaccounts-wfqhm deletion completed in 6.229921384s • [SLOW TEST:40.742 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:10:45.835: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 12 11:10:46.044: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix129464962/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:10:46.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dq5k5" for this suite. May 12 11:10:52.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:10:52.218: INFO: namespace: e2e-tests-kubectl-dq5k5, resource: bindings, ignored listing per whitelist May 12 11:10:52.259: INFO: namespace e2e-tests-kubectl-dq5k5 deletion completed in 6.141870454s • [SLOW TEST:6.425 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:10:52.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 12 11:10:52.397: INFO: Waiting up to 5m0s for pod "downward-api-3e1732dd-9441-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-hnxdd" to be "success or failure" May 12 11:10:52.407: INFO: Pod "downward-api-3e1732dd-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.840007ms May 12 11:10:54.573: INFO: Pod "downward-api-3e1732dd-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176300005s May 12 11:10:56.576: INFO: Pod "downward-api-3e1732dd-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179355373s May 12 11:10:58.579: INFO: Pod "downward-api-3e1732dd-9441-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.182246201s STEP: Saw pod success May 12 11:10:58.579: INFO: Pod "downward-api-3e1732dd-9441-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:10:58.581: INFO: Trying to get logs from node hunter-worker2 pod downward-api-3e1732dd-9441-11ea-92b2-0242ac11001c container dapi-container: STEP: delete the pod May 12 11:10:58.676: INFO: Waiting for pod downward-api-3e1732dd-9441-11ea-92b2-0242ac11001c to disappear May 12 11:10:58.685: INFO: Pod downward-api-3e1732dd-9441-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:10:58.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hnxdd" for this suite. May 12 11:11:04.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:11:04.750: INFO: namespace: e2e-tests-downward-api-hnxdd, resource: bindings, ignored listing per whitelist May 12 11:11:04.828: INFO: namespace e2e-tests-downward-api-hnxdd deletion completed in 6.140995687s • [SLOW TEST:12.569 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:11:04.829: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-gbwr STEP: Creating a pod to test atomic-volume-subpath May 12 11:11:04.957: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gbwr" in namespace "e2e-tests-subpath-dgc54" to be "success or failure" May 12 11:11:04.998: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Pending", Reason="", readiness=false. Elapsed: 40.802617ms May 12 11:11:07.118: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160990213s May 12 11:11:09.120: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163523141s May 12 11:11:11.394: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437352747s May 12 11:11:13.398: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=true. Elapsed: 8.44070022s May 12 11:11:15.401: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=false. Elapsed: 10.444442341s May 12 11:11:17.404: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=false. Elapsed: 12.447096712s May 12 11:11:19.409: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=false. Elapsed: 14.452111637s May 12 11:11:21.412: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=false. Elapsed: 16.455562311s May 12 11:11:23.416: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.459370944s May 12 11:11:25.420: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=false. Elapsed: 20.463415165s May 12 11:11:27.424: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=false. Elapsed: 22.467497504s May 12 11:11:29.562: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=false. Elapsed: 24.605500083s May 12 11:11:31.566: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Running", Reason="", readiness=false. Elapsed: 26.60935053s May 12 11:11:33.585: INFO: Pod "pod-subpath-test-secret-gbwr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.628269561s STEP: Saw pod success May 12 11:11:33.585: INFO: Pod "pod-subpath-test-secret-gbwr" satisfied condition "success or failure" May 12 11:11:33.587: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-gbwr container test-container-subpath-secret-gbwr: STEP: delete the pod May 12 11:11:33.608: INFO: Waiting for pod pod-subpath-test-secret-gbwr to disappear May 12 11:11:33.651: INFO: Pod pod-subpath-test-secret-gbwr no longer exists STEP: Deleting pod pod-subpath-test-secret-gbwr May 12 11:11:33.651: INFO: Deleting pod "pod-subpath-test-secret-gbwr" in namespace "e2e-tests-subpath-dgc54" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:11:33.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-dgc54" for this suite. 
May 12 11:11:39.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:11:39.678: INFO: namespace: e2e-tests-subpath-dgc54, resource: bindings, ignored listing per whitelist May 12 11:11:39.728: INFO: namespace e2e-tests-subpath-dgc54 deletion completed in 6.072603242s • [SLOW TEST:34.899 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:11:39.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 11:11:39.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-hlllm' May 12 11:11:39.915: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 11:11:39.915: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 12 11:11:39.929: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 12 11:11:39.943: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 12 11:11:39.950: INFO: scanned /root for discovery docs: May 12 11:11:39.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-hlllm' May 12 11:11:58.039: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 12 11:11:58.039: INFO: stdout: "Created e2e-test-nginx-rc-86f8f5f06cb60dcded9f010a86ace293\nScaling up e2e-test-nginx-rc-86f8f5f06cb60dcded9f010a86ace293 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-86f8f5f06cb60dcded9f010a86ace293 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-86f8f5f06cb60dcded9f010a86ace293 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 12 11:11:58.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hlllm' May 12 11:11:58.152: INFO: stderr: "" May 12 11:11:58.152: INFO: stdout: "e2e-test-nginx-rc-86f8f5f06cb60dcded9f010a86ace293-fkm76 e2e-test-nginx-rc-z84f9 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 12 11:12:03.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hlllm' May 12 11:12:03.263: INFO: stderr: "" May 12 11:12:03.263: INFO: stdout: "e2e-test-nginx-rc-86f8f5f06cb60dcded9f010a86ace293-fkm76 " May 12 11:12:03.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-86f8f5f06cb60dcded9f010a86ace293-fkm76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlllm' May 12 11:12:03.355: INFO: stderr: "" May 12 11:12:03.355: INFO: stdout: "true" May 12 11:12:03.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-86f8f5f06cb60dcded9f010a86ace293-fkm76 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlllm' May 12 11:12:03.453: INFO: stderr: "" May 12 11:12:03.453: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 12 11:12:03.453: INFO: e2e-test-nginx-rc-86f8f5f06cb60dcded9f010a86ace293-fkm76 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 12 11:12:03.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hlllm' May 12 11:12:03.584: INFO: stderr: "" May 12 11:12:03.584: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:12:03.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hlllm" for this suite. 
May 12 11:12:27.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:12:27.793: INFO: namespace: e2e-tests-kubectl-hlllm, resource: bindings, ignored listing per whitelist May 12 11:12:27.842: INFO: namespace e2e-tests-kubectl-hlllm deletion completed in 24.253839038s • [SLOW TEST:48.113 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:12:27.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 11:12:28.152: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 12 11:12:33.157: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 11:12:33.157: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be 
cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 12 11:12:33.196: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-prrnp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-prrnp/deployments/test-cleanup-deployment,UID:7a2ae58a-9441-11ea-99e8-0242ac110002,ResourceVersion:10153817,Generation:1,CreationTimestamp:2020-05-12 11:12:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 12 11:12:33.221: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
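The Deployment dump above shows a RollingUpdate strategy whose maxSurge and maxUnavailable both print as 25% (the `%!,(MISSING)` noise is a Go format-verb artifact in the e2e logger, not part of the spec). As a rough illustration of how those percentages resolve to pod counts, assuming the documented rounding rules (maxSurge rounds up, maxUnavailable rounds down so a rollout can always make progress):

```python
import math

def rolling_update_bounds(replicas: int,
                          max_surge: str = "25%",
                          max_unavailable: str = "25%"):
    """Resolve percentage maxSurge/maxUnavailable to pod counts.

    Kubernetes rounds maxSurge *up* and maxUnavailable *down*,
    which guarantees at least one pod can be added or removed.
    """
    surge_pct = int(max_surge.rstrip("%"))
    unavail_pct = int(max_unavailable.rstrip("%"))
    surge = math.ceil(replicas * surge_pct / 100)
    unavailable = math.floor(replicas * unavail_pct / 100)
    return surge, unavailable

# For the single-replica test-cleanup-deployment in the log:
print(rolling_update_bounds(1))   # (1, 0): one extra pod allowed, none may be unavailable
print(rolling_update_bounds(10))  # (3, 2)
```

With `RevisionHistoryLimit:*0`, as in the dump, old ReplicaSets are deleted as soon as they are fully scaled down, which is exactly what this "should delete old replica sets" test asserts.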
May 12 11:12:33.222: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 12 11:12:33.222: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-prrnp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-prrnp/replicasets/test-cleanup-controller,UID:7725187c-9441-11ea-99e8-0242ac110002,ResourceVersion:10153818,Generation:1,CreationTimestamp:2020-05-12 11:12:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 7a2ae58a-9441-11ea-99e8-0242ac110002 0xc0009538f7 0xc0009538f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 11:12:33.237: INFO: Pod "test-cleanup-controller-rsc6z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-rsc6z,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-prrnp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-prrnp/pods/test-cleanup-controller-rsc6z,UID:772ecb03-9441-11ea-99e8-0242ac110002,ResourceVersion:10153813,Generation:0,CreationTimestamp:2020-05-12 11:12:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 7725187c-9441-11ea-99e8-0242ac110002 0xc001c3a447 0xc001c3a448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-d5557 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d5557,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-d5557 true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c3a4c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c3a4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:12:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:12:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:12:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:12:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.8,StartTime:2020-05-12 11:12:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 11:12:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ff3cc9a4175b7ccf46bd58f87e65b20c1ac3d0c6d9abd0e8d7d5808363faaa2b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:12:33.237: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-prrnp" for this suite.
May 12 11:12:41.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:12:41.720: INFO: namespace: e2e-tests-deployment-prrnp, resource: bindings, ignored listing per whitelist
May 12 11:12:41.722: INFO: namespace e2e-tests-deployment-prrnp deletion completed in 8.452499182s
• [SLOW TEST:13.880 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:12:41.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7f95ca72-9441-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume secrets
May 12 11:12:42.468: INFO: Waiting up to 5m0s for pod "pod-secrets-7fa06f58-9441-11ea-92b2-0242ac11001c" in namespace "e2e-tests-secrets-fxks4" to be "success or failure"
May 12 11:12:42.508: INFO: Pod "pod-secrets-7fa06f58-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.495211ms
May 12 11:12:44.904: INFO: Pod "pod-secrets-7fa06f58-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435105419s
May 12 11:12:46.907: INFO: Pod "pod-secrets-7fa06f58-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438547837s
May 12 11:12:48.911: INFO: Pod "pod-secrets-7fa06f58-9441-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.442560133s
STEP: Saw pod success
May 12 11:12:48.911: INFO: Pod "pod-secrets-7fa06f58-9441-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:12:48.915: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7fa06f58-9441-11ea-92b2-0242ac11001c container secret-volume-test:
STEP: delete the pod
May 12 11:12:49.076: INFO: Waiting for pod pod-secrets-7fa06f58-9441-11ea-92b2-0242ac11001c to disappear
May 12 11:12:49.174: INFO: Pod pod-secrets-7fa06f58-9441-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:12:49.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fxks4" for this suite.
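The secret-volume test above creates a Secret and a pod that mounts it, waits for the pod to run to "success or failure", and then checks the container's output. A minimal sketch of that pairing, with illustrative names (the log uses generated names, and the e2e suite runs its own test image rather than busybox):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test          # hypothetical; the log uses a generated name
data:
  data-1: dmFsdWUtMQ==       # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test # container name matches the log
    image: busybox           # assumption; stand-in for the e2e test image
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
```

The "Succeeded" phase the log polls for corresponds to this pod's single container exiting 0 after printing the mounted key.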
May 12 11:12:57.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:12:57.685: INFO: namespace: e2e-tests-secrets-fxks4, resource: bindings, ignored listing per whitelist
May 12 11:12:57.722: INFO: namespace e2e-tests-secrets-fxks4 deletion completed in 8.544617675s
• [SLOW TEST:16.000 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:12:57.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:13:06.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-9zvpv" for this suite.
May 12 11:13:52.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:13:52.679: INFO: namespace: e2e-tests-kubelet-test-9zvpv, resource: bindings, ignored listing per whitelist May 12 11:13:52.722: INFO: namespace e2e-tests-kubelet-test-9zvpv deletion completed in 46.095962138s • [SLOW TEST:55.000 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:13:52.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 12 11:13:53.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
create -f - --namespace=e2e-tests-kubectl-cfdwx' May 12 11:13:53.751: INFO: stderr: "" May 12 11:13:53.751: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 11:13:53.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cfdwx' May 12 11:13:54.360: INFO: stderr: "" May 12 11:13:54.360: INFO: stdout: "update-demo-nautilus-cn25z update-demo-nautilus-q86ch " May 12 11:13:54.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cn25z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:13:54.670: INFO: stderr: "" May 12 11:13:54.670: INFO: stdout: "" May 12 11:13:54.670: INFO: update-demo-nautilus-cn25z is created but not running May 12 11:13:59.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cfdwx' May 12 11:13:59.998: INFO: stderr: "" May 12 11:13:59.998: INFO: stdout: "update-demo-nautilus-cn25z update-demo-nautilus-q86ch " May 12 11:13:59.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cn25z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:00.102: INFO: stderr: "" May 12 11:14:00.102: INFO: stdout: "" May 12 11:14:00.102: INFO: update-demo-nautilus-cn25z is created but not running May 12 11:14:05.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:05.247: INFO: stderr: "" May 12 11:14:05.247: INFO: stdout: "update-demo-nautilus-cn25z update-demo-nautilus-q86ch " May 12 11:14:05.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cn25z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:05.344: INFO: stderr: "" May 12 11:14:05.344: INFO: stdout: "true" May 12 11:14:05.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cn25z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:05.445: INFO: stderr: "" May 12 11:14:05.445: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:14:05.445: INFO: validating pod update-demo-nautilus-cn25z May 12 11:14:05.448: INFO: got data: { "image": "nautilus.jpg" } May 12 11:14:05.448: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 11:14:05.449: INFO: update-demo-nautilus-cn25z is verified up and running May 12 11:14:05.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q86ch -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:05.548: INFO: stderr: "" May 12 11:14:05.548: INFO: stdout: "true" May 12 11:14:05.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q86ch -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:05.646: INFO: stderr: "" May 12 11:14:05.646: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:14:05.646: INFO: validating pod update-demo-nautilus-q86ch May 12 11:14:05.650: INFO: got data: { "image": "nautilus.jpg" } May 12 11:14:05.650: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 11:14:05.650: INFO: update-demo-nautilus-q86ch is verified up and running STEP: rolling-update to new replication controller May 12 11:14:05.653: INFO: scanned /root for discovery docs: May 12 11:14:05.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:31.584: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 12 11:14:31.584: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. 
Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 11:14:31.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:31.812: INFO: stderr: "" May 12 11:14:31.812: INFO: stdout: "update-demo-kitten-2svtx update-demo-kitten-krb2v " May 12 11:14:31.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2svtx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:32.050: INFO: stderr: "" May 12 11:14:32.050: INFO: stdout: "true" May 12 11:14:32.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2svtx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:32.139: INFO: stderr: "" May 12 11:14:32.139: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 11:14:32.139: INFO: validating pod update-demo-kitten-2svtx May 12 11:14:32.142: INFO: got data: { "image": "kitten.jpg" } May 12 11:14:32.142: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 12 11:14:32.142: INFO: update-demo-kitten-2svtx is verified up and running May 12 11:14:32.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-krb2v -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:32.227: INFO: stderr: "" May 12 11:14:32.227: INFO: stdout: "true" May 12 11:14:32.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-krb2v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfdwx' May 12 11:14:32.379: INFO: stderr: "" May 12 11:14:32.379: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 11:14:32.379: INFO: validating pod update-demo-kitten-krb2v May 12 11:14:32.384: INFO: got data: { "image": "kitten.jpg" } May 12 11:14:32.384: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 12 11:14:32.384: INFO: update-demo-kitten-krb2v is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:14:32.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cfdwx" for this suite. 
May 12 11:14:54.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:14:54.846: INFO: namespace: e2e-tests-kubectl-cfdwx, resource: bindings, ignored listing per whitelist
May 12 11:14:54.853: INFO: namespace e2e-tests-kubectl-cfdwx deletion completed in 22.465669499s
• [SLOW TEST:62.131 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:14:54.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-pqz97/configmap-test-cee57b58-9441-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume configMaps
May 12 11:14:56.056: INFO: Waiting up to 5m0s for pod "pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c" in namespace "e2e-tests-configmap-pqz97" to be "success or failure"
May 12 11:14:56.059: INFO: Pod "pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.823314ms
May 12 11:14:58.063: INFO: Pod "pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006864926s
May 12 11:15:00.079: INFO: Pod "pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022828546s
May 12 11:15:02.235: INFO: Pod "pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179247783s
May 12 11:15:04.241: INFO: Pod "pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.185573088s
STEP: Saw pod success
May 12 11:15:04.242: INFO: Pod "pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:15:04.246: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c container env-test:
STEP: delete the pod
May 12 11:15:04.551: INFO: Waiting for pod pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c to disappear
May 12 11:15:04.648: INFO: Pod pod-configmaps-cee5f2e1-9441-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:15:04.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pqz97" for this suite.
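The environment-consumption test above creates a ConfigMap and a pod whose container imports one of its keys as an environment variable, then checks the container's printed environment. A minimal sketch, with illustrative names (the log's objects carry generated suffixes, and the suite uses its own test image):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test       # hypothetical; the log uses a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test           # container name matches the log
    image: busybox           # assumption; stand-in for the e2e test image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```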
May 12 11:15:10.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:15:10.724: INFO: namespace: e2e-tests-configmap-pqz97, resource: bindings, ignored listing per whitelist
May 12 11:15:10.755: INFO: namespace e2e-tests-configmap-pqz97 deletion completed in 6.103990172s
• [SLOW TEST:15.901 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:15:10.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-d8290f0d-9441-11ea-92b2-0242ac11001c
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:15:16.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zfkj7" for this suite.
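The binary-data test above ("Waiting for pod with text data" / "Waiting for pod with binary data") exercises a ConfigMap that carries both plain `data` and `binaryData`; both appear as files in a mounted volume, with `binaryData` values supplied base64-encoded. A minimal sketch of such a ConfigMap (key names are illustrative, not the test's):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd   # hypothetical; the log uses a generated suffix
data:
  text: value-1              # plain UTF-8 key, mounted as a text file
binaryData:
  blob: UEsDBA==             # arbitrary bytes, base64-encoded in the manifest
```

Keys in `data` and `binaryData` share one namespace, so a key may appear in only one of the two maps.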
May 12 11:15:42.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:15:42.971: INFO: namespace: e2e-tests-configmap-zfkj7, resource: bindings, ignored listing per whitelist
May 12 11:15:43.948: INFO: namespace e2e-tests-configmap-zfkj7 deletion completed in 27.043174656s
• [SLOW TEST:33.193 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:15:43.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 11:15:46.248: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
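The "simple daemon set" created here runs one pod per schedulable node; the control-plane node is skipped throughout the log because the pods do not tolerate its `node-role.kubernetes.io/master:NoSchedule` taint. A rough sketch of such a DaemonSet with the RollingUpdate strategy this test exercises (labels are illustrative; the initial image matches the log):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set                 # illustrative label
  updateStrategy:
    type: RollingUpdate                          # the strategy under test
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine  # initial image from the log
```

The test then patches the template image to gcr.io/kubernetes-e2e-test-images/redis:1.0 and polls until every node's pod has been replaced, which is the "Wrong image for pod" loop that follows.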
May 12 11:15:46.566: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:46.582: INFO: Number of nodes with available pods: 0 May 12 11:15:46.582: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:47.586: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:47.588: INFO: Number of nodes with available pods: 0 May 12 11:15:47.588: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:49.470: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:49.746: INFO: Number of nodes with available pods: 0 May 12 11:15:49.747: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:50.757: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:50.759: INFO: Number of nodes with available pods: 0 May 12 11:15:50.759: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:51.818: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:51.820: INFO: Number of nodes with available pods: 0 May 12 11:15:51.820: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:52.608: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:52.610: INFO: Number of nodes with available pods: 0 May 12 11:15:52.610: 
INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:53.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:53.622: INFO: Number of nodes with available pods: 0 May 12 11:15:53.622: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:55.087: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:55.296: INFO: Number of nodes with available pods: 0 May 12 11:15:55.296: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:55.586: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:55.588: INFO: Number of nodes with available pods: 0 May 12 11:15:55.589: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:56.614: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:56.667: INFO: Number of nodes with available pods: 1 May 12 11:15:56.667: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:57.587: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:57.591: INFO: Number of nodes with available pods: 1 May 12 11:15:57.591: INFO: Node hunter-worker is running more than one daemon pod May 12 11:15:59.153: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:59.230: INFO: 
Number of nodes with available pods: 2 May 12 11:15:59.230: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 12 11:15:59.661: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:15:59.661: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:15:59.716: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:00.720: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:00.721: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:00.724: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:01.719: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:01.719: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:01.719: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:01.723: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:02.719: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
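The repeated "DaemonSet pods can't tolerate node hunter-control-plane" messages above are expected: the test's DaemonSet carries no toleration for the control-plane taint `node-role.kubernetes.io/master:NoSchedule`, so the framework skips that node when counting available pods. For reference, a DaemonSet that should also run on such a tainted node would need a toleration along these lines (an illustrative fragment, not part of the test's actual spec):

```yaml
# Hypothetical pod-template fragment: allows DaemonSet pods to schedule onto
# nodes carrying the node-role.kubernetes.io/master:NoSchedule taint
# reported in the log above.
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
```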
May 12 11:16:02.719: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:02.719: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:02.723: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:03.720: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:03.720: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:03.720: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:03.722: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:04.720: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:04.720: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:04.720: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:04.724: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:05.720: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:05.720: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:05.720: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 12 11:16:05.724: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:06.721: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:06.721: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:06.721: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:06.725: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:07.721: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:07.721: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:07.721: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:07.724: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:08.719: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:08.719: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:08.719: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 12 11:16:08.722: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:09.719: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:09.719: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:09.719: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:09.721: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:10.720: INFO: Wrong image for pod: daemon-set-4nnlj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:10.720: INFO: Pod daemon-set-4nnlj is not available May 12 11:16:10.720: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:10.723: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:11.720: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:11.720: INFO: Pod daemon-set-tknqj is not available May 12 11:16:11.723: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:12.721: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 12 11:16:12.721: INFO: Pod daemon-set-tknqj is not available May 12 11:16:12.724: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:13.759: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:13.759: INFO: Pod daemon-set-tknqj is not available May 12 11:16:13.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:14.782: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:14.782: INFO: Pod daemon-set-tknqj is not available May 12 11:16:14.985: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:15.719: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:15.722: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:16.853: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:16.857: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:17.730: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 12 11:16:17.730: INFO: Pod daemon-set-s4qx2 is not available May 12 11:16:17.733: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:18.720: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:18.720: INFO: Pod daemon-set-s4qx2 is not available May 12 11:16:18.723: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:19.720: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:19.720: INFO: Pod daemon-set-s4qx2 is not available May 12 11:16:19.722: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:20.746: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 11:16:20.746: INFO: Pod daemon-set-s4qx2 is not available May 12 11:16:20.750: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:21.721: INFO: Wrong image for pod: daemon-set-s4qx2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
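The "Wrong image for pod" loop above is the test polling until the RollingUpdate strategy replaces the nginx pods with redis ones, pod by pod. The shape of the DaemonSet being exercised corresponds roughly to the following sketch (image values are taken from the log; the selector label is an assumption):

```yaml
# Sketch of the DaemonSet this test exercises: with updateStrategy
# RollingUpdate, changing the template image below triggers the
# pod-by-pod replacement visible in the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set        # assumed label; not shown in the log
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      containers:
      - name: app
        # updated image per the log; the old one was docker.io/library/nginx:1.14-alpine
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```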
May 12 11:16:21.721: INFO: Pod daemon-set-s4qx2 is not available May 12 11:16:21.727: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:22.721: INFO: Pod daemon-set-zb98m is not available May 12 11:16:22.726: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 12 11:16:22.729: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:22.732: INFO: Number of nodes with available pods: 1 May 12 11:16:22.732: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:16:23.736: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:23.739: INFO: Number of nodes with available pods: 1 May 12 11:16:23.739: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:16:24.736: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:24.739: INFO: Number of nodes with available pods: 1 May 12 11:16:24.739: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:16:25.736: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:25.739: INFO: Number of nodes with available pods: 1 May 12 11:16:25.739: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:16:26.961: INFO: 
DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:26.963: INFO: Number of nodes with available pods: 1 May 12 11:16:26.963: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:16:27.751: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:16:27.754: INFO: Number of nodes with available pods: 2 May 12 11:16:27.754: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8glsg, will wait for the garbage collector to delete the pods May 12 11:16:27.820: INFO: Deleting DaemonSet.extensions daemon-set took: 4.097779ms May 12 11:16:28.221: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.177373ms May 12 11:16:42.134: INFO: Number of nodes with available pods: 0 May 12 11:16:42.134: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:16:42.137: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8glsg/daemonsets","resourceVersion":"10154629"},"items":null} May 12 11:16:42.140: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8glsg/pods","resourceVersion":"10154629"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:16:42.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-8glsg" for this 
suite.
May 12 11:16:50.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:16:50.546: INFO: namespace: e2e-tests-daemonsets-8glsg, resource: bindings, ignored listing per whitelist
May 12 11:16:50.570: INFO: namespace e2e-tests-daemonsets-8glsg deletion completed in 8.418349141s
• [SLOW TEST:66.621 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:16:50.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 11:16:50.822: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 12 11:16:50.848: INFO: Number of nodes with available pods: 0
May 12 11:16:50.848: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
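The step above ('Creating daemon "daemon-set" with a node selector', then flipping a node label between blue and green) can be pictured as a `nodeSelector` on the pod template, toggled by relabeling a node. The exact label key the test uses is not shown in the log, so the values below are hypothetical:

```yaml
# Sketch: a DaemonSet constrained by nodeSelector. Its pods run only on
# nodes labeled color=blue; relabeling a node to color=green (as the
# test does later) unschedules the daemon pod from that node.
spec:
  template:
    spec:
      nodeSelector:
        color: blue   # hypothetical label key/value
```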
May 12 11:16:51.004: INFO: Number of nodes with available pods: 0 May 12 11:16:51.004: INFO: Node hunter-worker is running more than one daemon pod May 12 11:16:52.007: INFO: Number of nodes with available pods: 0 May 12 11:16:52.007: INFO: Node hunter-worker is running more than one daemon pod May 12 11:16:53.302: INFO: Number of nodes with available pods: 0 May 12 11:16:53.302: INFO: Node hunter-worker is running more than one daemon pod May 12 11:16:54.093: INFO: Number of nodes with available pods: 0 May 12 11:16:54.093: INFO: Node hunter-worker is running more than one daemon pod May 12 11:16:55.008: INFO: Number of nodes with available pods: 0 May 12 11:16:55.008: INFO: Node hunter-worker is running more than one daemon pod May 12 11:16:56.007: INFO: Number of nodes with available pods: 1 May 12 11:16:56.007: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 12 11:16:56.063: INFO: Number of nodes with available pods: 1 May 12 11:16:56.063: INFO: Number of running nodes: 0, number of available pods: 1 May 12 11:16:57.067: INFO: Number of nodes with available pods: 0 May 12 11:16:57.067: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 12 11:16:57.075: INFO: Number of nodes with available pods: 0 May 12 11:16:57.075: INFO: Node hunter-worker is running more than one daemon pod May 12 11:16:58.079: INFO: Number of nodes with available pods: 0 May 12 11:16:58.079: INFO: Node hunter-worker is running more than one daemon pod May 12 11:16:59.078: INFO: Number of nodes with available pods: 0 May 12 11:16:59.078: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:00.078: INFO: Number of nodes with available pods: 0 May 12 11:17:00.078: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:01.078: INFO: Number of nodes with 
available pods: 0 May 12 11:17:01.078: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:02.077: INFO: Number of nodes with available pods: 0 May 12 11:17:02.077: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:03.079: INFO: Number of nodes with available pods: 0 May 12 11:17:03.079: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:04.078: INFO: Number of nodes with available pods: 0 May 12 11:17:04.078: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:05.083: INFO: Number of nodes with available pods: 0 May 12 11:17:05.083: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:06.674: INFO: Number of nodes with available pods: 0 May 12 11:17:06.674: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:07.078: INFO: Number of nodes with available pods: 0 May 12 11:17:07.078: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:08.106: INFO: Number of nodes with available pods: 0 May 12 11:17:08.106: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:09.079: INFO: Number of nodes with available pods: 0 May 12 11:17:09.079: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:10.079: INFO: Number of nodes with available pods: 0 May 12 11:17:10.079: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:11.100: INFO: Number of nodes with available pods: 0 May 12 11:17:11.100: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:12.207: INFO: Number of nodes with available pods: 0 May 12 11:17:12.207: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:13.078: INFO: Number of nodes with available pods: 0 May 12 11:17:13.078: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:14.128: INFO: Number of nodes with available pods: 0 May 12 11:17:14.129: INFO: Node hunter-worker is running 
more than one daemon pod May 12 11:17:15.297: INFO: Number of nodes with available pods: 0 May 12 11:17:15.297: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:16.284: INFO: Number of nodes with available pods: 0 May 12 11:17:16.284: INFO: Node hunter-worker is running more than one daemon pod May 12 11:17:17.105: INFO: Number of nodes with available pods: 1 May 12 11:17:17.105: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mlkbw, will wait for the garbage collector to delete the pods May 12 11:17:17.171: INFO: Deleting DaemonSet.extensions daemon-set took: 7.413113ms May 12 11:17:17.571: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.263819ms May 12 11:17:32.010: INFO: Number of nodes with available pods: 0 May 12 11:17:32.010: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:17:32.013: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mlkbw/daemonsets","resourceVersion":"10154800"},"items":null} May 12 11:17:32.046: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mlkbw/pods","resourceVersion":"10154800"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:17:32.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mlkbw" for this suite. 
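The teardown line "will wait for the garbage collector to delete the pods" reflects a delete issued with foreground cascading, so the DaemonSet object is only removed once its dependent pods are gone. Expressed as the API's DeleteOptions body, the mechanism looks roughly like this (a sketch of the cascade policy, not the framework's literal call):

```yaml
# DeleteOptions for a foreground-cascading delete: the owner object stays
# (marked with a deletion timestamp) until the garbage collector has
# removed its dependents, matching the waits seen in the log.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```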
May 12 11:17:40.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:17:40.538: INFO: namespace: e2e-tests-daemonsets-mlkbw, resource: bindings, ignored listing per whitelist
May 12 11:17:40.607: INFO: namespace e2e-tests-daemonsets-mlkbw deletion completed in 8.153701339s
• [SLOW TEST:50.038 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:17:40.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 12 11:17:41.000: INFO: Waiting up to 5m0s for pod "pod-319bf16b-9442-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-qjndp" to be "success or failure"
May 12 11:17:41.071: INFO: Pod "pod-319bf16b-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 70.803373ms
May 12 11:17:43.074: INFO: Pod "pod-319bf16b-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074172462s
May 12 11:17:45.078: INFO: Pod "pod-319bf16b-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078129071s
May 12 11:17:47.081: INFO: Pod "pod-319bf16b-9442-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080967987s
STEP: Saw pod success
May 12 11:17:47.081: INFO: Pod "pod-319bf16b-9442-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:17:47.083: INFO: Trying to get logs from node hunter-worker2 pod pod-319bf16b-9442-11ea-92b2-0242ac11001c container test-container:
STEP: delete the pod
May 12 11:17:47.095: INFO: Waiting for pod pod-319bf16b-9442-11ea-92b2-0242ac11001c to disappear
May 12 11:17:47.128: INFO: Pod pod-319bf16b-9442-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:17:47.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qjndp" for this suite.
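The emptyDir test above creates a short-lived pod that exercises a default-medium emptyDir volume with mode 0777 and then waits for it to reach Succeeded. The pod it polls looks roughly like the sketch below; the image, command, and mount path are assumptions (the e2e suite uses its own mounttest image), only the pod name and the emptyDir/default-medium/0777 combination come from the log:

```yaml
# Sketch of an emptyDir test pod like the one polled above: runs once,
# touches a file on a default-medium emptyDir mount, exits (Succeeded).
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example          # hypothetical name
spec:
  restartPolicy: Never                # pod should terminate, not restart
  containers:
  - name: test-container
    image: busybox:1.29               # assumption; the suite uses a mounttest image
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium: node's backing storage
```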
May 12 11:17:55.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:17:55.173: INFO: namespace: e2e-tests-emptydir-qjndp, resource: bindings, ignored listing per whitelist
May 12 11:17:55.603: INFO: namespace e2e-tests-emptydir-qjndp deletion completed in 8.472839184s
• [SLOW TEST:14.996 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:17:55.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 11:17:56.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
May 12 11:17:57.317: INFO: stderr: ""
May 12 11:17:57.317: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:17:57.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r2b9z" for this suite.
May 12 11:18:03.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:18:03.981: INFO: namespace: e2e-tests-kubectl-r2b9z, resource: bindings, ignored listing per whitelist
May 12 11:18:04.007: INFO: namespace e2e-tests-kubectl-r2b9z deletion completed in 6.541631266s
• [SLOW TEST:8.403 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:18:04.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 11:18:04.401: INFO: Creating deployment "test-recreate-deployment" May 12 11:18:04.413: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 12 11:18:04.678: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 12 11:18:07.457: INFO: Waiting deployment "test-recreate-deployment" to complete May 12 11:18:07.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879087, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879084, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:18:09.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879085, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879087, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879084, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 11:18:11.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879087, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879084, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 11:18:13.495: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May 12 11:18:13.506: INFO: Updating deployment test-recreate-deployment
May 12 11:18:13.506: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
May 12
11:18:14.245: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-q8xkv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q8xkv/deployments/test-recreate-deployment,UID:3f99df72-9442-11ea-99e8-0242ac110002,ResourceVersion:10154975,Generation:2,CreationTimestamp:2020-05-12 11:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-12 11:18:13 +0000 UTC 2020-05-12 11:18:13 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-12 11:18:14 +0000 UTC 2020-05-12 11:18:04 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 12 11:18:14.247: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-q8xkv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q8xkv/replicasets/test-recreate-deployment-589c4bfd,UID:452df483-9442-11ea-99e8-0242ac110002,ResourceVersion:10154973,Generation:1,CreationTimestamp:2020-05-12 11:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3f99df72-9442-11ea-99e8-0242ac110002 0xc00156634f 0xc001566360}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 11:18:14.247: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 12 11:18:14.247: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-q8xkv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q8xkv/replicasets/test-recreate-deployment-5bf7f65dc,UID:3fc3e3fb-9442-11ea-99e8-0242ac110002,ResourceVersion:10154963,Generation:2,CreationTimestamp:2020-05-12 11:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3f99df72-9442-11ea-99e8-0242ac110002 0xc001566440 0xc001566441}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 11:18:14.368: INFO: Pod "test-recreate-deployment-589c4bfd-md6gd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-md6gd,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-q8xkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q8xkv/pods/test-recreate-deployment-589c4bfd-md6gd,UID:452fa11c-9442-11ea-99e8-0242ac110002,ResourceVersion:10154976,Generation:0,CreationTimestamp:2020-05-12 11:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 452df483-9442-11ea-99e8-0242ac110002 0xc0015677df 0xc0015677f0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2jkh5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2jkh5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2jkh5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001567860} {node.kubernetes.io/unreachable Exists NoExecute 0xc001567880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:18:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:18:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:18:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:18:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-12 11:18:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:18:14.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-q8xkv" for this suite.
May 12 11:18:20.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:18:20.423: INFO: namespace: e2e-tests-deployment-q8xkv, resource: bindings, ignored listing per whitelist
May 12 11:18:20.463: INFO: namespace e2e-tests-deployment-q8xkv deletion completed in 6.091491848s
• [SLOW TEST:16.455 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:18:20.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-496ddf0a-9442-11ea-92b2-0242ac11001c
STEP: Creating configMap with name cm-test-opt-upd-496ddf6e-9442-11ea-92b2-0242ac11001c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-496ddf0a-9442-11ea-92b2-0242ac11001c
STEP: Updating configmap cm-test-opt-upd-496ddf6e-9442-11ea-92b2-0242ac11001c
STEP: Creating configMap with name cm-test-opt-create-496ddfa0-9442-11ea-92b2-0242ac11001c
STEP: waiting to observe update in volume
[AfterEach]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:18:33.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8wbtd" for this suite.
May 12 11:18:55.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:18:55.505: INFO: namespace: e2e-tests-projected-8wbtd, resource: bindings, ignored listing per whitelist
May 12 11:18:55.656: INFO: namespace e2e-tests-projected-8wbtd deletion completed in 22.458763543s
• [SLOW TEST:35.193 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:18:55.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 12 11:18:55.919: INFO: Got
: ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-xdw5h,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdw5h/configmaps/e2e-watch-test-watch-closed,UID:5e42fd6b-9442-11ea-99e8-0242ac110002,ResourceVersion:10155131,Generation:0,CreationTimestamp:2020-05-12 11:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 12 11:18:55.919: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-xdw5h,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdw5h/configmaps/e2e-watch-test-watch-closed,UID:5e42fd6b-9442-11ea-99e8-0242ac110002,ResourceVersion:10155132,Generation:0,CreationTimestamp:2020-05-12 11:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 12 11:18:55.981: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-xdw5h,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdw5h/configmaps/e2e-watch-test-watch-closed,UID:5e42fd6b-9442-11ea-99e8-0242ac110002,ResourceVersion:10155133,Generation:0,CreationTimestamp:2020-05-12 11:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 12 11:18:55.981: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-xdw5h,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdw5h/configmaps/e2e-watch-test-watch-closed,UID:5e42fd6b-9442-11ea-99e8-0242ac110002,ResourceVersion:10155134,Generation:0,CreationTimestamp:2020-05-12 11:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:18:55.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-xdw5h" for this suite.
May 12 11:19:04.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:19:04.124: INFO: namespace: e2e-tests-watch-xdw5h, resource: bindings, ignored listing per whitelist
May 12 11:19:04.169: INFO: namespace e2e-tests-watch-xdw5h deletion completed in 8.16190251s
• [SLOW TEST:8.513 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:19:04.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 12 11:19:05.684: INFO: PodSpec: initContainers in spec.initContainers
May 12 11:20:15.498: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
ObjectMeta:v1.ObjectMeta{Name:"pod-init-6420c2d3-9442-11ea-92b2-0242ac11001c", GenerateName:"", Namespace:"e2e-tests-init-container-mjh8s", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-mjh8s/pods/pod-init-6420c2d3-9442-11ea-92b2-0242ac11001c", UID:"645e913e-9442-11ea-99e8-0242ac110002", ResourceVersion:"10155308", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724879146, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"684252506"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9cz7l", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0026ee000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9cz7l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9cz7l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9cz7l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a16088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023ba180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", 
Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a16110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a16130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a16138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a1613c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879147, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879147, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879147, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879146, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.92", StartTime:(*v1.Time)(0xc002482040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002482080), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00002e0e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://16a2c0d161219a56ac73756d210e9e6ce0c86b917c612518dab2c3a0e12d14e4"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024820a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002482060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:20:15.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-mjh8s" for this suite.
May 12 11:20:42.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:20:42.634: INFO: namespace: e2e-tests-init-container-mjh8s, resource: bindings, ignored listing per whitelist May 12 11:20:42.670: INFO: namespace e2e-tests-init-container-mjh8s deletion completed in 26.615551987s • [SLOW TEST:98.501 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:20:42.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 11:20:43.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 
--image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5rbhg' May 12 11:20:48.890: INFO: stderr: "" May 12 11:20:48.890: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 12 11:20:53.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5rbhg -o json' May 12 11:20:54.039: INFO: stderr: "" May 12 11:20:54.039: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-12T11:20:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-5rbhg\",\n \"resourceVersion\": \"10155405\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-5rbhg/pods/e2e-test-nginx-pod\",\n \"uid\": \"a1a2336c-9442-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-69d2w\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n 
\"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-69d2w\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-69d2w\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:20:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:20:53Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:20:53Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:20:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://57d4c0e6c686600c9d6dee09ab964d52705df7c3468d5aad3e2bbf0bd550a187\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-12T11:20:52Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.19\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-12T11:20:48Z\"\n }\n}\n" STEP: replace the image in the pod May 12 11:20:54.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-5rbhg' May 12 11:20:54.682: INFO: stderr: "" May 12 11:20:54.682: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 12 11:20:54.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5rbhg' May 12 11:21:04.679: INFO: stderr: "" May 12 11:21:04.679: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:21:04.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5rbhg" for this suite. May 12 11:21:11.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:21:11.435: INFO: namespace: e2e-tests-kubectl-5rbhg, resource: bindings, ignored listing per whitelist May 12 11:21:11.447: INFO: namespace e2e-tests-kubectl-5rbhg deletion completed in 6.353637153s • [SLOW TEST:28.777 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:21:11.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets 
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-afa76f98-9442-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume secrets
May 12 11:21:12.538: INFO: Waiting up to 5m0s for pod "pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c" in namespace "e2e-tests-secrets-7kj4c" to be "success or failure"
May 12 11:21:12.585: INFO: Pod "pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.848467ms
May 12 11:21:14.899: INFO: Pod "pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360873783s
May 12 11:21:17.039: INFO: Pod "pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500971174s
May 12 11:21:19.042: INFO: Pod "pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503840852s
May 12 11:21:21.046: INFO: Pod "pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.507698965s
May 12 11:21:23.049: INFO: Pod "pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.511491831s
STEP: Saw pod success
May 12 11:21:23.050: INFO: Pod "pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:21:23.052: INFO: Trying to get logs from node hunter-worker pod pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c container secret-volume-test:
STEP: delete the pod
May 12 11:21:23.307: INFO: Waiting for pod pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c to disappear
May 12 11:21:23.315: INFO: Pod pod-secrets-afbc5c8f-9442-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:21:23.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7kj4c" for this suite.
May 12 11:21:29.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:21:29.411: INFO: namespace: e2e-tests-secrets-7kj4c, resource: bindings, ignored listing per whitelist
May 12 11:21:29.432: INFO: namespace e2e-tests-secrets-7kj4c deletion completed in 6.114552094s
• [SLOW TEST:17.985 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12
11:21:29.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 12 11:21:29.627: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-a,UID:b9ec3951-9442-11ea-99e8-0242ac110002,ResourceVersion:10155527,Generation:0,CreationTimestamp:2020-05-12 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 11:21:29.627: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-a,UID:b9ec3951-9442-11ea-99e8-0242ac110002,ResourceVersion:10155527,Generation:0,CreationTimestamp:2020-05-12 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying 
configmap A and ensuring the correct watchers observe the notification May 12 11:21:39.633: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-a,UID:b9ec3951-9442-11ea-99e8-0242ac110002,ResourceVersion:10155547,Generation:0,CreationTimestamp:2020-05-12 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 12 11:21:39.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-a,UID:b9ec3951-9442-11ea-99e8-0242ac110002,ResourceVersion:10155547,Generation:0,CreationTimestamp:2020-05-12 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 12 11:21:49.640: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-a,UID:b9ec3951-9442-11ea-99e8-0242ac110002,ResourceVersion:10155567,Generation:0,CreationTimestamp:2020-05-12 11:21:29 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 11:21:49.640: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-a,UID:b9ec3951-9442-11ea-99e8-0242ac110002,ResourceVersion:10155567,Generation:0,CreationTimestamp:2020-05-12 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 12 11:21:59.648: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-a,UID:b9ec3951-9442-11ea-99e8-0242ac110002,ResourceVersion:10155587,Generation:0,CreationTimestamp:2020-05-12 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 11:21:59.648: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-a,UID:b9ec3951-9442-11ea-99e8-0242ac110002,ResourceVersion:10155587,Generation:0,CreationTimestamp:2020-05-12 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 12 11:22:09.656: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-b,UID:d1c7d4db-9442-11ea-99e8-0242ac110002,ResourceVersion:10155606,Generation:0,CreationTimestamp:2020-05-12 11:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 11:22:09.656: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-b,UID:d1c7d4db-9442-11ea-99e8-0242ac110002,ResourceVersion:10155606,Generation:0,CreationTimestamp:2020-05-12 11:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 12 11:22:19.716: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-b,UID:d1c7d4db-9442-11ea-99e8-0242ac110002,ResourceVersion:10155626,Generation:0,CreationTimestamp:2020-05-12 11:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 11:22:19.716: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-kml2l,SelfLink:/api/v1/namespaces/e2e-tests-watch-kml2l/configmaps/e2e-watch-test-configmap-b,UID:d1c7d4db-9442-11ea-99e8-0242ac110002,ResourceVersion:10155626,Generation:0,CreationTimestamp:2020-05-12 11:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:22:29.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-kml2l" for this suite. 
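For reference, the object the label-A watchers observed above corresponds to a ConfigMap like the following (reconstructed from the dumps; the `mutation` value mirrors the MODIFIED events):

```yaml
# Sketch of e2e-watch-test-configmap-a as seen by the watchers above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"
```

A label-selector watch such as `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch` receives the same ADDED/MODIFIED/DELETED notifications logged above.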
May 12 11:22:35.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:22:35.999: INFO: namespace: e2e-tests-watch-kml2l, resource: bindings, ignored listing per whitelist
May 12 11:22:36.002: INFO: namespace e2e-tests-watch-kml2l deletion completed in 6.174535987s
• [SLOW TEST:66.570 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:22:36.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
May 12 11:22:36.155: INFO: Waiting up to 5m0s for pod "client-containers-e190396e-9442-11ea-92b2-0242ac11001c" in namespace "e2e-tests-containers-pq6zr" to be "success or failure"
May 12 11:22:36.165: INFO: Pod "client-containers-e190396e-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.950499ms
May 12 11:22:38.732: INFO: Pod "client-containers-e190396e-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.57692877s
May 12 11:22:40.737: INFO: Pod "client-containers-e190396e-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581244534s
May 12 11:22:42.740: INFO: Pod "client-containers-e190396e-9442-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.585079264s
STEP: Saw pod success
May 12 11:22:42.740: INFO: Pod "client-containers-e190396e-9442-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:22:42.746: INFO: Trying to get logs from node hunter-worker pod client-containers-e190396e-9442-11ea-92b2-0242ac11001c container test-container:
STEP: delete the pod
May 12 11:22:43.220: INFO: Waiting for pod client-containers-e190396e-9442-11ea-92b2-0242ac11001c to disappear
May 12 11:22:43.493: INFO: Pod client-containers-e190396e-9442-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:22:43.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-pq6zr" for this suite.
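For reference, the entrypoint override this test verifies is expressed with the pod `command:` field. A minimal sketch follows; the container name matches the log, while the image and the override command itself are assumptions (the log does not record them):

```yaml
# Sketch of an entrypoint-override pod. In Kubernetes, command: replaces
# the image's ENTRYPOINT and args: replaces its CMD.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["/bin/echo", "overridden"]    # assumed override command
```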
May 12 11:22:49.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:22:49.759: INFO: namespace: e2e-tests-containers-pq6zr, resource: bindings, ignored listing per whitelist
May 12 11:22:49.783: INFO: namespace e2e-tests-containers-pq6zr deletion completed in 6.220618882s
• [SLOW TEST:13.781 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:22:49.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 11:22:49.990: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 50.418441ms)
May 12 11:22:49.994: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.752024ms)
May 12 11:22:49.997: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.350824ms)
May 12 11:22:50.000: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.704053ms)
May 12 11:22:50.003: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.128691ms)
May 12 11:22:50.006: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.078856ms)
May 12 11:22:50.038: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 32.120979ms)
May 12 11:22:50.042: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.512929ms)
May 12 11:22:50.045: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.972997ms)
May 12 11:22:50.048: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.146079ms)
May 12 11:22:50.051: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.107777ms)
May 12 11:22:50.055: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.105719ms)
May 12 11:22:50.058: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.278819ms)
May 12 11:22:50.061: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.645734ms)
May 12 11:22:50.065: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.047557ms)
May 12 11:22:50.068: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.070339ms)
May 12 11:22:50.071: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.146254ms)
May 12 11:22:50.074: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.28279ms)
May 12 11:22:50.077: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.901547ms)
May 12 11:22:50.080: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.850786ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:22:50.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-rf77t" for this suite.
May 12 11:22:56.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:22:56.162: INFO: namespace: e2e-tests-proxy-rf77t, resource: bindings, ignored listing per whitelist
May 12 11:22:56.168: INFO: namespace e2e-tests-proxy-rf77t deletion completed in 6.08463322s
• [SLOW TEST:6.384 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:22:56.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 12 11:22:56.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edc35133-9442-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-6ckrs" to be "success or failure"
May 12 11:22:56.633: INFO: Pod "downwardapi-volume-edc35133-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.570578ms
May 12 11:22:58.636: INFO: Pod "downwardapi-volume-edc35133-9442-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032852336s
May 12 11:23:00.639: INFO: Pod "downwardapi-volume-edc35133-9442-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035583077s
STEP: Saw pod success
May 12 11:23:00.639: INFO: Pod "downwardapi-volume-edc35133-9442-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:23:00.641: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-edc35133-9442-11ea-92b2-0242ac11001c container client-container:
STEP: delete the pod
May 12 11:23:00.847: INFO: Waiting for pod downwardapi-volume-edc35133-9442-11ea-92b2-0242ac11001c to disappear
May 12 11:23:00.866: INFO: Pod downwardapi-volume-edc35133-9442-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:23:00.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6ckrs" for this suite.
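For reference, the downward API volume plugin this test exercises exposes a container's CPU limit through a projected volume with a `resourceFieldRef`. A minimal sketch follows; the container name matches the log, while the image, command, mount path, and CPU value are assumptions:

```yaml
# Sketch of a pod that reads its own CPU limit back from a projected
# downward API volume, then exits -- mirroring the "success or failure"
# pattern in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29    # assumed image
    command: ["/bin/cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                          # assumed value, read back below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```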
May 12 11:23:08.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:23:08.923: INFO: namespace: e2e-tests-projected-6ckrs, resource: bindings, ignored listing per whitelist May 12 11:23:08.946: INFO: namespace e2e-tests-projected-6ckrs deletion completed in 8.077394027s • [SLOW TEST:12.778 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:23:08.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
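For reference, a minimal DaemonSet like the test's "daemon-set" can be sketched as follows. The name matches the log; the labels and image are assumptions. Note it carries no toleration for the master taint, which is why the log below repeatedly skips hunter-control-plane:

```yaml
# Sketch of the simple DaemonSet under test. Without a toleration for
# node-role.kubernetes.io/master:NoSchedule, pods land only on the two
# worker nodes, matching "Number of running nodes: 2" in the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set          # assumed label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # assumed image
```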
May 12 11:23:09.317: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:23:09.319: INFO: Number of nodes with available pods: 0 May 12 11:23:09.319: INFO: Node hunter-worker is running more than one daemon pod May 12 11:23:10.324: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:23:10.327: INFO: Number of nodes with available pods: 0 May 12 11:23:10.327: INFO: Node hunter-worker is running more than one daemon pod May 12 11:23:11.324: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:23:11.328: INFO: Number of nodes with available pods: 0 May 12 11:23:11.328: INFO: Node hunter-worker is running more than one daemon pod May 12 11:23:12.524: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:23:12.527: INFO: Number of nodes with available pods: 0 May 12 11:23:12.527: INFO: Node hunter-worker is running more than one daemon pod May 12 11:23:13.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:23:13.623: INFO: Number of nodes with available pods: 0 May 12 11:23:13.623: INFO: Node hunter-worker is running more than one daemon pod May 12 11:23:14.354: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:23:14.357: INFO: Number of nodes with available pods: 0 May 12 11:23:14.357: 
INFO: Node hunter-worker is running more than one daemon pod May 12 11:23:15.350: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:23:15.352: INFO: Number of nodes with available pods: 2 May 12 11:23:15.352: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 12 11:23:15.384: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:23:15.543: INFO: Number of nodes with available pods: 2 May 12 11:23:15.543: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kxk8l, will wait for the garbage collector to delete the pods May 12 11:23:16.704: INFO: Deleting DaemonSet.extensions daemon-set took: 50.719612ms May 12 11:23:16.804: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.195934ms May 12 11:23:31.451: INFO: Number of nodes with available pods: 0 May 12 11:23:31.451: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:23:31.455: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kxk8l/daemonsets","resourceVersion":"10155873"},"items":null} May 12 11:23:31.990: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kxk8l/pods","resourceVersion":"10155874"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:23:31.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-kxk8l" for this suite. May 12 11:23:40.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:23:40.150: INFO: namespace: e2e-tests-daemonsets-kxk8l, resource: bindings, ignored listing per whitelist May 12 11:23:40.197: INFO: namespace e2e-tests-daemonsets-kxk8l deletion completed in 8.197139312s • [SLOW TEST:31.252 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:23:40.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-r8nwr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8nwr to expose endpoints map[] May 12 
11:23:40.499: INFO: Get endpoints failed (57.734608ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 12 11:23:41.502: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8nwr exposes endpoints map[] (1.061500328s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-r8nwr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8nwr to expose endpoints map[pod1:[80]] May 12 11:23:47.512: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (6.003527587s elapsed, will retry) May 12 11:23:50.784: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8nwr exposes endpoints map[pod1:[80]] (9.275085282s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-r8nwr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8nwr to expose endpoints map[pod1:[80] pod2:[80]] May 12 11:23:55.838: INFO: Unexpected endpoints: found map[0887df13-9443-11ea-99e8-0242ac110002:[80]], expected map[pod1:[80] pod2:[80]] (5.051784845s elapsed, will retry) May 12 11:23:57.098: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8nwr exposes endpoints map[pod1:[80] pod2:[80]] (6.311946911s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-r8nwr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8nwr to expose endpoints map[pod2:[80]] May 12 11:23:58.413: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8nwr exposes endpoints map[pod2:[80]] (1.310202819s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-r8nwr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8nwr to expose endpoints map[] May 12 11:24:00.128: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8nwr exposes endpoints map[] 
(1.711459359s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:24:01.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-r8nwr" for this suite. May 12 11:24:27.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:24:27.462: INFO: namespace: e2e-tests-services-r8nwr, resource: bindings, ignored listing per whitelist May 12 11:24:27.489: INFO: namespace e2e-tests-services-r8nwr deletion completed in 26.333496501s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:47.291 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:24:27.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 
12 11:24:27.796: INFO: Waiting up to 5m0s for pod "pod-2415afd3-9443-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-v8b92" to be "success or failure" May 12 11:24:27.889: INFO: Pod "pod-2415afd3-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 93.176646ms May 12 11:24:29.892: INFO: Pod "pod-2415afd3-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096481802s May 12 11:24:32.003: INFO: Pod "pod-2415afd3-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206723055s May 12 11:24:34.007: INFO: Pod "pod-2415afd3-9443-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210756209s STEP: Saw pod success May 12 11:24:34.007: INFO: Pod "pod-2415afd3-9443-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:24:34.009: INFO: Trying to get logs from node hunter-worker2 pod pod-2415afd3-9443-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 11:24:34.412: INFO: Waiting for pod pod-2415afd3-9443-11ea-92b2-0242ac11001c to disappear May 12 11:24:34.419: INFO: Pod pod-2415afd3-9443-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:24:34.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-v8b92" for this suite. 
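The "(non-root,0666,tmpfs)" case above combines three knobs: a memory-backed emptyDir (`medium: Memory`), a 0666 file mode on the written file, and a non-root security context. A pod of roughly that shape looks like the following sketch (user ID, image, and command are assumptions; the real test uses its own test image to set and verify the mode):

```yaml
# Illustrative non-root pod writing to a tmpfs-backed emptyDir.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example
spec:
  securityContext:
    runAsUser: 1001          # assumed non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory         # tmpfs backing
```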
May 12 11:24:40.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:24:40.508: INFO: namespace: e2e-tests-emptydir-v8b92, resource: bindings, ignored listing per whitelist May 12 11:24:40.550: INFO: namespace e2e-tests-emptydir-v8b92 deletion completed in 6.128583389s • [SLOW TEST:13.060 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:24:40.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-5x8l STEP: Creating a pod to test atomic-volume-subpath May 12 11:24:40.704: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5x8l" in namespace "e2e-tests-subpath-tr9zj" to be "success or failure" May 12 11:24:40.712: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.176066ms May 12 11:24:42.716: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012473812s May 12 11:24:44.720: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015829239s May 12 11:24:46.724: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020238283s May 12 11:24:48.729: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Running", Reason="", readiness=true. Elapsed: 8.024928633s May 12 11:24:50.733: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Running", Reason="", readiness=false. Elapsed: 10.029556257s May 12 11:24:52.738: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Running", Reason="", readiness=false. Elapsed: 12.03401097s May 12 11:24:54.741: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Running", Reason="", readiness=false. Elapsed: 14.037166253s May 12 11:24:56.744: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Running", Reason="", readiness=false. Elapsed: 16.040425272s May 12 11:24:58.831: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Running", Reason="", readiness=false. Elapsed: 18.127354525s May 12 11:25:00.835: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Running", Reason="", readiness=false. Elapsed: 20.131186661s May 12 11:25:02.839: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Running", Reason="", readiness=false. Elapsed: 22.135243454s May 12 11:25:04.843: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Running", Reason="", readiness=false. Elapsed: 24.138909848s May 12 11:25:06.846: INFO: Pod "pod-subpath-test-downwardapi-5x8l": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.142141014s STEP: Saw pod success May 12 11:25:06.846: INFO: Pod "pod-subpath-test-downwardapi-5x8l" satisfied condition "success or failure" May 12 11:25:06.848: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-5x8l container test-container-subpath-downwardapi-5x8l: STEP: delete the pod May 12 11:25:06.910: INFO: Waiting for pod pod-subpath-test-downwardapi-5x8l to disappear May 12 11:25:06.949: INFO: Pod pod-subpath-test-downwardapi-5x8l no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-5x8l May 12 11:25:06.949: INFO: Deleting pod "pod-subpath-test-downwardapi-5x8l" in namespace "e2e-tests-subpath-tr9zj" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:25:06.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-tr9zj" for this suite. May 12 11:25:15.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:25:15.138: INFO: namespace: e2e-tests-subpath-tr9zj, resource: bindings, ignored listing per whitelist May 12 11:25:15.177: INFO: namespace e2e-tests-subpath-tr9zj deletion completed in 8.221635371s • [SLOW TEST:34.626 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:25:15.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 11:25:16.664: INFO: Waiting up to 5m0s for pod "downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-p68kj" to be "success or failure" May 12 11:25:16.677: INFO: Pod "downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.281237ms May 12 11:25:18.988: INFO: Pod "downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323360001s May 12 11:25:21.022: INFO: Pod "downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357740522s May 12 11:25:23.274: INFO: Pod "downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.609586554s May 12 11:25:25.278: INFO: Pod "downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.614035128s STEP: Saw pod success May 12 11:25:25.278: INFO: Pod "downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:25:25.281: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 11:25:25.855: INFO: Waiting for pod downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c to disappear May 12 11:25:25.869: INFO: Pod downwardapi-volume-411c3735-9443-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:25:25.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p68kj" for this suite. May 12 11:25:31.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:25:31.927: INFO: namespace: e2e-tests-projected-p68kj, resource: bindings, ignored listing per whitelist May 12 11:25:32.005: INFO: namespace e2e-tests-projected-p68kj deletion completed in 6.134178012s • [SLOW TEST:16.828 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:25:32.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-4a8f913c-9443-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 11:25:32.378: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a904871-9443-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-vmslz" to be "success or failure" May 12 11:25:32.399: INFO: Pod "pod-projected-configmaps-4a904871-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.612316ms May 12 11:25:34.405: INFO: Pod "pod-projected-configmaps-4a904871-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026690854s May 12 11:25:36.519: INFO: Pod "pod-projected-configmaps-4a904871-9443-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.140753087s STEP: Saw pod success May 12 11:25:36.519: INFO: Pod "pod-projected-configmaps-4a904871-9443-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:25:36.836: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-4a904871-9443-11ea-92b2-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 11:25:37.059: INFO: Waiting for pod pod-projected-configmaps-4a904871-9443-11ea-92b2-0242ac11001c to disappear May 12 11:25:37.213: INFO: Pod pod-projected-configmaps-4a904871-9443-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:25:37.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vmslz" for this suite. May 12 11:25:43.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:25:43.278: INFO: namespace: e2e-tests-projected-vmslz, resource: bindings, ignored listing per whitelist May 12 11:25:43.326: INFO: namespace e2e-tests-projected-vmslz deletion completed in 6.110363871s • [SLOW TEST:11.321 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 
11:25:43.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:25:43.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-2rx8c" for this suite. May 12 11:25:49.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:25:49.700: INFO: namespace: e2e-tests-services-2rx8c, resource: bindings, ignored listing per whitelist May 12 11:25:49.705: INFO: namespace e2e-tests-services-2rx8c deletion completed in 6.089965148s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.378 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:25:49.705: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 11:25:49.927: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.320613ms) May 12 11:25:49.930: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.101488ms) May 12 11:25:49.933: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.028606ms) May 12 11:25:49.936: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.770129ms) May 12 11:25:49.939: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.016703ms) May 12 11:25:49.942: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.934391ms) May 12 11:25:49.945: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.900324ms) May 12 11:25:49.948: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.757006ms) May 12 11:25:49.951: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.069165ms) May 12 11:25:49.954: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.165531ms) May 12 11:25:49.957: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.158041ms) May 12 11:25:49.960: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.037674ms) May 12 11:25:50.101: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 140.979672ms) May 12 11:25:50.104: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.123521ms) May 12 11:25:50.107: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.551437ms) May 12 11:25:50.110: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.728355ms) May 12 11:25:50.112: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.564283ms) May 12 11:25:50.115: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.416387ms) May 12 11:25:50.117: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.267787ms) May 12 11:25:50.120: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.244312ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:25:50.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-l9b7w" for this suite. May 12 11:25:56.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:25:56.238: INFO: namespace: e2e-tests-proxy-l9b7w, resource: bindings, ignored listing per whitelist May 12 11:25:56.248: INFO: namespace e2e-tests-proxy-l9b7w deletion completed in 6.12564446s • [SLOW TEST:6.543 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:25:56.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 11:25:56.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58e2e427-9443-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-z2rhv" to be "success or failure" May 12 11:25:56.423: INFO: Pod "downwardapi-volume-58e2e427-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 90.820733ms May 12 11:25:58.427: INFO: Pod "downwardapi-volume-58e2e427-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094854627s May 12 11:26:00.963: INFO: Pod "downwardapi-volume-58e2e427-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.630545333s May 12 11:26:02.977: INFO: Pod "downwardapi-volume-58e2e427-9443-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.645041166s STEP: Saw pod success May 12 11:26:02.977: INFO: Pod "downwardapi-volume-58e2e427-9443-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:26:02.986: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-58e2e427-9443-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 11:26:02.999: INFO: Waiting for pod downwardapi-volume-58e2e427-9443-11ea-92b2-0242ac11001c to disappear May 12 11:26:03.010: INFO: Pod downwardapi-volume-58e2e427-9443-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:26:03.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z2rhv" for this suite. 
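The "should set mode on item file" case above exercises the per-item `mode` field of a projected downward-API volume. A minimal hedged sketch of that volume declaration (names and the mode value are illustrative, not from the test):

```yaml
# Illustrative projected downward-API volume with an explicit per-item
# file mode, the feature this test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-example
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400       # assumed mode; the test checks its own value
```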
May 12 11:26:09.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:26:09.205: INFO: namespace: e2e-tests-projected-z2rhv, resource: bindings, ignored listing per whitelist May 12 11:26:09.229: INFO: namespace e2e-tests-projected-z2rhv deletion completed in 6.217163422s • [SLOW TEST:12.980 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:26:09.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-60aff781-9443-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 11:26:09.530: INFO: Waiting up to 5m0s for pod "pod-configmaps-60c1d487-9443-11ea-92b2-0242ac11001c" in namespace "e2e-tests-configmap-mhdjm" to be "success or failure" May 12 11:26:09.556: INFO: Pod "pod-configmaps-60c1d487-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.592095ms May 12 11:26:11.560: INFO: Pod "pod-configmaps-60c1d487-9443-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030218301s May 12 11:26:13.729: INFO: Pod "pod-configmaps-60c1d487-9443-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.199493033s May 12 11:26:15.733: INFO: Pod "pod-configmaps-60c1d487-9443-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203241698s STEP: Saw pod success May 12 11:26:15.733: INFO: Pod "pod-configmaps-60c1d487-9443-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:26:15.736: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-60c1d487-9443-11ea-92b2-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 11:26:16.851: INFO: Waiting for pod pod-configmaps-60c1d487-9443-11ea-92b2-0242ac11001c to disappear May 12 11:26:16.930: INFO: Pod pod-configmaps-60c1d487-9443-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:26:16.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-mhdjm" for this suite. 
May 12 11:26:27.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:26:27.867: INFO: namespace: e2e-tests-configmap-mhdjm, resource: bindings, ignored listing per whitelist May 12 11:26:27.896: INFO: namespace e2e-tests-configmap-mhdjm deletion completed in 10.962818701s • [SLOW TEST:18.666 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:26:27.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-g6jxd STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 11:26:28.542: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 11:26:59.313: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.27 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-g6jxd 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:26:59.313: INFO: >>> kubeConfig: /root/.kube/config I0512 11:26:59.888779 6 log.go:172] (0xc002576000) (0xc0018ffc20) Create stream I0512 11:26:59.888801 6 log.go:172] (0xc002576000) (0xc0018ffc20) Stream added, broadcasting: 1 I0512 11:26:59.891295 6 log.go:172] (0xc002576000) Reply frame received for 1 I0512 11:26:59.891349 6 log.go:172] (0xc002576000) (0xc000e23040) Create stream I0512 11:26:59.891366 6 log.go:172] (0xc002576000) (0xc000e23040) Stream added, broadcasting: 3 I0512 11:26:59.892423 6 log.go:172] (0xc002576000) Reply frame received for 3 I0512 11:26:59.892449 6 log.go:172] (0xc002576000) (0xc0018ffcc0) Create stream I0512 11:26:59.892457 6 log.go:172] (0xc002576000) (0xc0018ffcc0) Stream added, broadcasting: 5 I0512 11:26:59.893293 6 log.go:172] (0xc002576000) Reply frame received for 5 I0512 11:27:00.954549 6 log.go:172] (0xc002576000) Data frame received for 5 I0512 11:27:00.954606 6 log.go:172] (0xc0018ffcc0) (5) Data frame handling I0512 11:27:00.954652 6 log.go:172] (0xc002576000) Data frame received for 3 I0512 11:27:00.954679 6 log.go:172] (0xc000e23040) (3) Data frame handling I0512 11:27:00.954723 6 log.go:172] (0xc000e23040) (3) Data frame sent I0512 11:27:00.954748 6 log.go:172] (0xc002576000) Data frame received for 3 I0512 11:27:00.954768 6 log.go:172] (0xc000e23040) (3) Data frame handling I0512 11:27:00.956766 6 log.go:172] (0xc002576000) Data frame received for 1 I0512 11:27:00.956793 6 log.go:172] (0xc0018ffc20) (1) Data frame handling I0512 11:27:00.956808 6 log.go:172] (0xc0018ffc20) (1) Data frame sent I0512 11:27:00.956822 6 log.go:172] (0xc002576000) (0xc0018ffc20) Stream removed, broadcasting: 1 I0512 11:27:00.956910 6 log.go:172] (0xc002576000) (0xc0018ffc20) Stream removed, broadcasting: 1 I0512 11:27:00.956932 6 log.go:172] (0xc002576000) (0xc000e23040) Stream removed, broadcasting: 3 I0512 
11:27:00.957302 6 log.go:172] (0xc002576000) Go away received I0512 11:27:00.957363 6 log.go:172] (0xc002576000) (0xc0018ffcc0) Stream removed, broadcasting: 5 May 12 11:27:00.957: INFO: Found all expected endpoints: [netserver-0] May 12 11:27:00.960: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.100 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-g6jxd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:27:00.960: INFO: >>> kubeConfig: /root/.kube/config I0512 11:27:00.996056 6 log.go:172] (0xc0009a6bb0) (0xc000e232c0) Create stream I0512 11:27:00.996111 6 log.go:172] (0xc0009a6bb0) (0xc000e232c0) Stream added, broadcasting: 1 I0512 11:27:00.999798 6 log.go:172] (0xc0009a6bb0) Reply frame received for 1 I0512 11:27:00.999847 6 log.go:172] (0xc0009a6bb0) (0xc001c21f40) Create stream I0512 11:27:00.999869 6 log.go:172] (0xc0009a6bb0) (0xc001c21f40) Stream added, broadcasting: 3 I0512 11:27:01.000984 6 log.go:172] (0xc0009a6bb0) Reply frame received for 3 I0512 11:27:01.001003 6 log.go:172] (0xc0009a6bb0) (0xc000e23360) Create stream I0512 11:27:01.001010 6 log.go:172] (0xc0009a6bb0) (0xc000e23360) Stream added, broadcasting: 5 I0512 11:27:01.002191 6 log.go:172] (0xc0009a6bb0) Reply frame received for 5 I0512 11:27:02.086208 6 log.go:172] (0xc0009a6bb0) Data frame received for 3 I0512 11:27:02.086240 6 log.go:172] (0xc001c21f40) (3) Data frame handling I0512 11:27:02.086252 6 log.go:172] (0xc001c21f40) (3) Data frame sent I0512 11:27:02.086267 6 log.go:172] (0xc0009a6bb0) Data frame received for 3 I0512 11:27:02.086299 6 log.go:172] (0xc001c21f40) (3) Data frame handling I0512 11:27:02.086333 6 log.go:172] (0xc0009a6bb0) Data frame received for 5 I0512 11:27:02.086356 6 log.go:172] (0xc000e23360) (5) Data frame handling I0512 11:27:02.088219 6 log.go:172] (0xc0009a6bb0) Data frame received for 1 I0512 11:27:02.088276 6 log.go:172] 
(0xc000e232c0) (1) Data frame handling I0512 11:27:02.088315 6 log.go:172] (0xc000e232c0) (1) Data frame sent I0512 11:27:02.088383 6 log.go:172] (0xc0009a6bb0) (0xc000e232c0) Stream removed, broadcasting: 1 I0512 11:27:02.088442 6 log.go:172] (0xc0009a6bb0) Go away received I0512 11:27:02.088598 6 log.go:172] (0xc0009a6bb0) (0xc000e232c0) Stream removed, broadcasting: 1 I0512 11:27:02.088629 6 log.go:172] (0xc0009a6bb0) (0xc001c21f40) Stream removed, broadcasting: 3 I0512 11:27:02.088650 6 log.go:172] (0xc0009a6bb0) (0xc000e23360) Stream removed, broadcasting: 5 May 12 11:27:02.088: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:27:02.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-g6jxd" for this suite. May 12 11:27:28.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:27:28.141: INFO: namespace: e2e-tests-pod-network-test-g6jxd, resource: bindings, ignored listing per whitelist May 12 11:27:28.169: INFO: namespace e2e-tests-pod-network-test-g6jxd deletion completed in 26.076805931s • [SLOW TEST:60.273 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:27:28.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 11:27:40.617: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 11:27:40.632: INFO: Pod pod-with-poststart-http-hook still exists May 12 11:27:42.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 11:27:42.635: INFO: Pod pod-with-poststart-http-hook still exists May 12 11:27:44.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 11:27:44.636: INFO: Pod pod-with-poststart-http-hook still exists May 12 11:27:46.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 11:27:46.635: INFO: Pod pod-with-poststart-http-hook still exists May 12 11:27:48.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 11:27:48.636: INFO: Pod pod-with-poststart-http-hook still exists May 12 11:27:50.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 11:27:50.636: INFO: Pod pod-with-poststart-http-hook still exists May 12 11:27:52.632: INFO: Waiting for pod 
pod-with-poststart-http-hook to disappear May 12 11:27:52.635: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:27:52.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mx69g" for this suite. May 12 11:28:17.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:28:17.063: INFO: namespace: e2e-tests-container-lifecycle-hook-mx69g, resource: bindings, ignored listing per whitelist May 12 11:28:17.105: INFO: namespace e2e-tests-container-lifecycle-hook-mx69g deletion completed in 24.468018523s • [SLOW TEST:48.936 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:28:17.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for 
configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 12 11:28:24.395: INFO: Pod name wrapped-volume-race-b1167bf0-9443-11ea-92b2-0242ac11001c: Found 0 pods out of 5 May 12 11:28:29.400: INFO: Pod name wrapped-volume-race-b1167bf0-9443-11ea-92b2-0242ac11001c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b1167bf0-9443-11ea-92b2-0242ac11001c in namespace e2e-tests-emptydir-wrapper-mjp22, will wait for the garbage collector to delete the pods May 12 11:31:03.478: INFO: Deleting ReplicationController wrapped-volume-race-b1167bf0-9443-11ea-92b2-0242ac11001c took: 6.785252ms May 12 11:31:04.079: INFO: Terminating ReplicationController wrapped-volume-race-b1167bf0-9443-11ea-92b2-0242ac11001c pods took: 600.207952ms STEP: Creating RC which spawns configmap-volume pods May 12 11:31:53.117: INFO: Pod name wrapped-volume-race-2d78b158-9444-11ea-92b2-0242ac11001c: Found 0 pods out of 5 May 12 11:31:58.125: INFO: Pod name wrapped-volume-race-2d78b158-9444-11ea-92b2-0242ac11001c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2d78b158-9444-11ea-92b2-0242ac11001c in namespace e2e-tests-emptydir-wrapper-mjp22, will wait for the garbage collector to delete the pods May 12 11:34:13.355: INFO: Deleting ReplicationController wrapped-volume-race-2d78b158-9444-11ea-92b2-0242ac11001c took: 34.022956ms May 12 11:34:13.655: INFO: Terminating ReplicationController wrapped-volume-race-2d78b158-9444-11ea-92b2-0242ac11001c pods took: 300.241903ms STEP: Creating RC which spawns configmap-volume pods May 12 11:35:03.078: INFO: Pod name wrapped-volume-race-9e77d1d9-9444-11ea-92b2-0242ac11001c: Found 0 pods out of 5 May 12 11:35:08.089: INFO: Pod name 
wrapped-volume-race-9e77d1d9-9444-11ea-92b2-0242ac11001c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9e77d1d9-9444-11ea-92b2-0242ac11001c in namespace e2e-tests-emptydir-wrapper-mjp22, will wait for the garbage collector to delete the pods May 12 11:37:33.472: INFO: Deleting ReplicationController wrapped-volume-race-9e77d1d9-9444-11ea-92b2-0242ac11001c took: 5.76972ms May 12 11:37:33.572: INFO: Terminating ReplicationController wrapped-volume-race-9e77d1d9-9444-11ea-92b2-0242ac11001c pods took: 100.163908ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:38:24.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-mjp22" for this suite. May 12 11:38:38.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:38:38.703: INFO: namespace: e2e-tests-emptydir-wrapper-mjp22, resource: bindings, ignored listing per whitelist May 12 11:38:38.756: INFO: namespace e2e-tests-emptydir-wrapper-mjp22 deletion completed in 14.106054802s • [SLOW TEST:621.650 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:38:38.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-1f6445ce-9445-11ea-92b2-0242ac11001c STEP: Creating secret with name s-test-opt-upd-1f64463b-9445-11ea-92b2-0242ac11001c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1f6445ce-9445-11ea-92b2-0242ac11001c STEP: Updating secret s-test-opt-upd-1f64463b-9445-11ea-92b2-0242ac11001c STEP: Creating secret with name s-test-opt-create-1f64466a-9445-11ea-92b2-0242ac11001c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:39:52.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f7vxl" for this suite. 
May 12 11:40:20.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:40:20.966: INFO: namespace: e2e-tests-projected-f7vxl, resource: bindings, ignored listing per whitelist May 12 11:40:21.127: INFO: namespace e2e-tests-projected-f7vxl deletion completed in 28.375454467s • [SLOW TEST:102.371 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:40:21.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 11:40:21.540: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c8d803f-9445-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-lxzs4" to be "success or failure" May 12 11:40:21.553: INFO: Pod "downwardapi-volume-5c8d803f-9445-11ea-92b2-0242ac11001c": Phase="Pending", 
Reason="", readiness=false. Elapsed: 13.769515ms May 12 11:40:23.557: INFO: Pod "downwardapi-volume-5c8d803f-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01778507s May 12 11:40:25.560: INFO: Pod "downwardapi-volume-5c8d803f-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020782688s May 12 11:40:27.564: INFO: Pod "downwardapi-volume-5c8d803f-9445-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024616387s STEP: Saw pod success May 12 11:40:27.564: INFO: Pod "downwardapi-volume-5c8d803f-9445-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:40:27.567: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5c8d803f-9445-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 11:40:27.611: INFO: Waiting for pod downwardapi-volume-5c8d803f-9445-11ea-92b2-0242ac11001c to disappear May 12 11:40:27.618: INFO: Pod downwardapi-volume-5c8d803f-9445-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:40:27.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lxzs4" for this suite. 
May 12 11:40:33.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:40:33.717: INFO: namespace: e2e-tests-projected-lxzs4, resource: bindings, ignored listing per whitelist May 12 11:40:33.722: INFO: namespace e2e-tests-projected-lxzs4 deletion completed in 6.101259819s • [SLOW TEST:12.595 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:40:33.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 12 11:40:33.928: INFO: Waiting up to 5m0s for pod "var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c" in namespace "e2e-tests-var-expansion-xbsdl" to be "success or failure" May 12 11:40:33.960: INFO: Pod "var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.111624ms May 12 11:40:35.963: INFO: Pod "var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035189023s May 12 11:40:37.966: INFO: Pod "var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038155428s May 12 11:40:39.969: INFO: Pod "var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041746423s May 12 11:40:42.125: INFO: Pod "var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.197245542s STEP: Saw pod success May 12 11:40:42.125: INFO: Pod "var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:40:42.127: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c container dapi-container: STEP: delete the pod May 12 11:40:42.298: INFO: Waiting for pod var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c to disappear May 12 11:40:42.300: INFO: Pod var-expansion-63f81b1b-9445-11ea-92b2-0242ac11001c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:40:42.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-xbsdl" for this suite. 
May 12 11:40:48.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:40:48.622: INFO: namespace: e2e-tests-var-expansion-xbsdl, resource: bindings, ignored listing per whitelist May 12 11:40:48.656: INFO: namespace e2e-tests-var-expansion-xbsdl deletion completed in 6.354530629s • [SLOW TEST:14.934 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:40:48.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 11:41:18.993: INFO: Container started at 2020-05-12 11:40:53 +0000 UTC, pod became ready at 2020-05-12 11:41:17 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 
11:41:18.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xqlrl" for this suite. May 12 11:41:41.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:41:41.060: INFO: namespace: e2e-tests-container-probe-xqlrl, resource: bindings, ignored listing per whitelist May 12 11:41:41.067: INFO: namespace e2e-tests-container-probe-xqlrl deletion completed in 22.069626621s • [SLOW TEST:52.410 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:41:41.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 12 11:41:41.189: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 11:41:41.204: INFO: Waiting for terminating namespaces to be deleted... 
May 12 11:41:41.206: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 12 11:41:41.209: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 12 11:41:41.209: INFO: Container kube-proxy ready: true, restart count 0 May 12 11:41:41.209: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 11:41:41.209: INFO: Container kindnet-cni ready: true, restart count 0 May 12 11:41:41.209: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 11:41:41.209: INFO: Container coredns ready: true, restart count 0 May 12 11:41:41.209: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 12 11:41:41.213: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 11:41:41.213: INFO: Container kindnet-cni ready: true, restart count 0 May 12 11:41:41.213: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 11:41:41.213: INFO: Container coredns ready: true, restart count 0 May 12 11:41:41.213: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 11:41:41.213: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e451b1aba3133], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:41:42.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-wr9sv" for this suite. May 12 11:41:48.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:41:48.350: INFO: namespace: e2e-tests-sched-pred-wr9sv, resource: bindings, ignored listing per whitelist May 12 11:41:48.419: INFO: namespace e2e-tests-sched-pred-wr9sv deletion completed in 6.133818944s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.352 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:41:48.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 12 
11:41:48.715: INFO: Waiting up to 5m0s for pod "pod-907de495-9445-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-j4zzv" to be "success or failure" May 12 11:41:48.776: INFO: Pod "pod-907de495-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 60.554521ms May 12 11:41:50.961: INFO: Pod "pod-907de495-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246265697s May 12 11:41:52.985: INFO: Pod "pod-907de495-9445-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.270015816s May 12 11:41:54.988: INFO: Pod "pod-907de495-9445-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.27286928s STEP: Saw pod success May 12 11:41:54.988: INFO: Pod "pod-907de495-9445-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:41:54.990: INFO: Trying to get logs from node hunter-worker2 pod pod-907de495-9445-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 11:41:55.114: INFO: Waiting for pod pod-907de495-9445-11ea-92b2-0242ac11001c to disappear May 12 11:41:55.140: INFO: Pod pod-907de495-9445-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:41:55.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-j4zzv" for this suite. 
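The phase transitions logged above (Pending → Running → Succeeded) come from a short-lived test pod that mounts a tmpfs-backed `emptyDir` and verifies the 0644 file mode. A rough sketch of such a pod, with the container image and command as assumptions (the e2e suite uses its own mount-test image):

```yaml
# Sketch of an emptyDir pod with medium: Memory (tmpfs) that writes a
# file with mode 0644 and exits, reaching phase Succeeded.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                # assumption; the real test uses a mounttest image
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs backing
```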
May 12 11:42:01.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:42:01.227: INFO: namespace: e2e-tests-emptydir-j4zzv, resource: bindings, ignored listing per whitelist May 12 11:42:01.264: INFO: namespace e2e-tests-emptydir-j4zzv deletion completed in 6.12068499s • [SLOW TEST:12.845 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:42:01.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 12 11:42:01.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:01.622: INFO: Number of nodes with available pods: 0 May 12 11:42:01.622: INFO: Node hunter-worker is running more than one daemon pod May 12 11:42:02.627: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:02.630: INFO: Number of nodes with available pods: 0 May 12 11:42:02.630: INFO: Node hunter-worker is running more than one daemon pod May 12 11:42:04.139: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:04.432: INFO: Number of nodes with available pods: 0 May 12 11:42:04.432: INFO: Node hunter-worker is running more than one daemon pod May 12 11:42:04.642: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:04.645: INFO: Number of nodes with available pods: 0 May 12 11:42:04.645: INFO: Node hunter-worker is running more than one daemon pod May 12 11:42:05.625: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:05.627: INFO: Number of nodes with available pods: 0 May 12 11:42:05.627: INFO: Node hunter-worker is running more than one daemon pod May 12 11:42:06.627: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:06.630: INFO: Number of nodes with available pods: 2 May 12 11:42:06.630: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 12 11:42:06.893: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:06.897: INFO: Number of nodes with available pods: 1 May 12 11:42:06.897: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:07.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:07.904: INFO: Number of nodes with available pods: 1 May 12 11:42:07.904: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:08.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:08.904: INFO: Number of nodes with available pods: 1 May 12 11:42:08.904: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:09.902: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:09.906: INFO: Number of nodes with available pods: 1 May 12 11:42:09.906: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:10.903: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:10.906: INFO: Number of nodes with available pods: 1 May 12 11:42:10.906: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:11.902: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:11.906: INFO: Number of nodes with available pods: 1 May 12 11:42:11.906: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:12.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:12.903: INFO: Number of nodes with available pods: 1 May 12 11:42:12.903: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:13.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:13.904: INFO: Number of nodes with available pods: 1 May 12 11:42:13.904: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:14.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:14.903: INFO: Number of nodes with available pods: 1 May 12 11:42:14.903: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:15.902: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:15.906: INFO: Number of nodes with available pods: 1 May 12 11:42:15.906: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:16.902: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:16.905: INFO: Number of nodes with available pods: 1 May 12 11:42:16.905: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:17.902: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:17.906: INFO: Number of nodes with available pods: 1 May 12 11:42:17.906: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:19.007: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:19.010: INFO: Number of nodes with available pods: 1 May 12 11:42:19.010: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:19.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:19.904: INFO: Number of nodes with available pods: 1 May 12 11:42:19.904: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:20.954: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:20.959: INFO: Number of nodes with available pods: 1 May 12 11:42:20.959: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:21.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:21.903: INFO: Number of nodes with available pods: 1 May 12 11:42:21.903: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:22.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:22.905: INFO: Number of nodes with available pods: 1 May 12 11:42:22.905: INFO: Node hunter-worker2 is running more than one daemon 
pod May 12 11:42:23.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:23.904: INFO: Number of nodes with available pods: 1 May 12 11:42:23.904: INFO: Node hunter-worker2 is running more than one daemon pod May 12 11:42:24.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:42:24.904: INFO: Number of nodes with available pods: 2 May 12 11:42:24.904: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-d4b48, will wait for the garbage collector to delete the pods May 12 11:42:24.964: INFO: Deleting DaemonSet.extensions daemon-set took: 5.745407ms May 12 11:42:25.064: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.242352ms May 12 11:42:29.967: INFO: Number of nodes with available pods: 0 May 12 11:42:29.967: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:42:29.970: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-d4b48/daemonsets","resourceVersion":"10159004"},"items":null} May 12 11:42:29.972: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-d4b48/pods","resourceVersion":"10159004"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:42:29.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-daemonsets-d4b48" for this suite. May 12 11:42:36.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:42:36.180: INFO: namespace: e2e-tests-daemonsets-d4b48, resource: bindings, ignored listing per whitelist May 12 11:42:36.204: INFO: namespace e2e-tests-daemonsets-d4b48 deletion completed in 6.220799733s • [SLOW TEST:34.940 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:42:36.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0512 11:42:46.569430 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 12 11:42:46.569: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:42:46.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wl4k6" for this suite. 
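The garbage-collector test above relies on each pod created by the ReplicationController carrying an `ownerReferences` entry pointing back at the RC: once the RC is deleted without orphaning its dependents, the garbage collector deletes the pods. A sketch of the metadata the GC acts on (the names and UID below are illustrative placeholders):

```yaml
# Illustrative pod metadata: the ownerReference ties the pod to its RC,
# so deleting the RC with background or foreground propagation lets the
# garbage collector delete this pod as well.
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-abcde                       # assumed pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc                           # assumed RC name
    uid: 00000000-0000-0000-0000-000000000000     # placeholder UID
    controller: true
    blockOwnerDeletion: true
```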
May 12 11:42:52.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:42:52.613: INFO: namespace: e2e-tests-gc-wl4k6, resource: bindings, ignored listing per whitelist May 12 11:42:52.666: INFO: namespace e2e-tests-gc-wl4k6 deletion completed in 6.094766093s • [SLOW TEST:16.462 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:42:52.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b6b87484-9445-11ea-92b2-0242ac11001c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b6b87484-9445-11ea-92b2-0242ac11001c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:44:07.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-projected-rg4s8" for this suite. May 12 11:44:31.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:44:31.936: INFO: namespace: e2e-tests-projected-rg4s8, resource: bindings, ignored listing per whitelist May 12 11:44:31.992: INFO: namespace e2e-tests-projected-rg4s8 deletion completed in 24.265732651s • [SLOW TEST:99.326 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:44:31.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 12 11:44:32.843: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 11:44:33.315: INFO: Waiting for terminating namespaces to be deleted... 
May 12 11:44:33.360: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 12 11:44:33.366: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 12 11:44:33.366: INFO: Container kube-proxy ready: true, restart count 0 May 12 11:44:33.366: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 11:44:33.366: INFO: Container kindnet-cni ready: true, restart count 0 May 12 11:44:33.366: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 11:44:33.366: INFO: Container coredns ready: true, restart count 0 May 12 11:44:33.366: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 12 11:44:33.372: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 11:44:33.372: INFO: Container kube-proxy ready: true, restart count 0 May 12 11:44:33.372: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 11:44:33.372: INFO: Container kindnet-cni ready: true, restart count 0 May 12 11:44:33.372: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 11:44:33.372: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 12 11:44:33.732: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker May 12 11:44:33.732: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 12 11:44:33.732: INFO: 
Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 12 11:44:33.732: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 12 11:44:33.732: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 12 11:44:33.732: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-f2ea76b1-9445-11ea-92b2-0242ac11001c.160e4543467c5bec], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-vksdg/filler-pod-f2ea76b1-9445-11ea-92b2-0242ac11001c to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-f2ea76b1-9445-11ea-92b2-0242ac11001c.160e454396780903], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f2ea76b1-9445-11ea-92b2-0242ac11001c.160e4543fb2c6136], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-f2ea76b1-9445-11ea-92b2-0242ac11001c.160e45441ae2971f], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-f2eb824c-9445-11ea-92b2-0242ac11001c.160e45434d5d6392], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-vksdg/filler-pod-f2eb824c-9445-11ea-92b2-0242ac11001c to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f2eb824c-9445-11ea-92b2-0242ac11001c.160e4543fcdde2b8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f2eb824c-9445-11ea-92b2-0242ac11001c.160e45444def5c7a], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-f2eb824c-9445-11ea-92b2-0242ac11001c.160e454461be06e2], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e4544b4ae7d9b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:44:41.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-vksdg" for this suite. May 12 11:44:51.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:44:51.892: INFO: namespace: e2e-tests-sched-pred-vksdg, resource: bindings, ignored listing per whitelist May 12 11:44:51.937: INFO: namespace e2e-tests-sched-pred-vksdg deletion completed in 10.363558227s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:19.944 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:44:51.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 12 11:44:52.573: INFO: Waiting up to 5m0s for pod "client-containers-fe179ea1-9445-11ea-92b2-0242ac11001c" in namespace "e2e-tests-containers-jbjfg" to be "success or failure" May 12 11:44:52.721: INFO: Pod "client-containers-fe179ea1-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 148.1213ms May 12 11:44:54.731: INFO: Pod "client-containers-fe179ea1-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158300188s May 12 11:44:56.835: INFO: Pod "client-containers-fe179ea1-9445-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261527056s May 12 11:44:58.864: INFO: Pod "client-containers-fe179ea1-9445-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.290818368s STEP: Saw pod success May 12 11:44:58.864: INFO: Pod "client-containers-fe179ea1-9445-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:44:58.866: INFO: Trying to get logs from node hunter-worker pod client-containers-fe179ea1-9445-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 11:44:59.002: INFO: Waiting for pod client-containers-fe179ea1-9445-11ea-92b2-0242ac11001c to disappear May 12 11:44:59.037: INFO: Pod client-containers-fe179ea1-9445-11ea-92b2-0242ac11001c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:44:59.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-jbjfg" for this suite. May 12 11:45:07.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:45:07.490: INFO: namespace: e2e-tests-containers-jbjfg, resource: bindings, ignored listing per whitelist May 12 11:45:07.506: INFO: namespace e2e-tests-containers-jbjfg deletion completed in 8.125333488s • [SLOW TEST:15.569 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client May 12 11:45:07.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:45:18.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-6tgl7" for this suite. May 12 11:45:25.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:45:25.278: INFO: namespace: e2e-tests-namespaces-6tgl7, resource: bindings, ignored listing per whitelist May 12 11:45:25.286: INFO: namespace e2e-tests-namespaces-6tgl7 deletion completed in 6.295086156s STEP: Destroying namespace "e2e-tests-nsdeletetest-fbpgb" for this suite. May 12 11:45:25.289: INFO: Namespace e2e-tests-nsdeletetest-fbpgb was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-n5c7t" for this suite. 
May 12 11:45:31.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:45:31.493: INFO: namespace: e2e-tests-nsdeletetest-n5c7t, resource: bindings, ignored listing per whitelist May 12 11:45:31.546: INFO: namespace e2e-tests-nsdeletetest-n5c7t deletion completed in 6.257159831s • [SLOW TEST:24.040 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:45:31.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 11:45:31.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-56zn2' 
May 12 11:45:37.024: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 11:45:37.024: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 12 11:45:37.086: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-856mw] May 12 11:45:37.086: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-856mw" in namespace "e2e-tests-kubectl-56zn2" to be "running and ready" May 12 11:45:37.164: INFO: Pod "e2e-test-nginx-rc-856mw": Phase="Pending", Reason="", readiness=false. Elapsed: 78.289956ms May 12 11:45:40.536: INFO: Pod "e2e-test-nginx-rc-856mw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.450511853s May 12 11:45:42.541: INFO: Pod "e2e-test-nginx-rc-856mw": Phase="Pending", Reason="", readiness=false. Elapsed: 5.455137899s May 12 11:45:44.545: INFO: Pod "e2e-test-nginx-rc-856mw": Phase="Running", Reason="", readiness=true. Elapsed: 7.458832271s May 12 11:45:44.545: INFO: Pod "e2e-test-nginx-rc-856mw" satisfied condition "running and ready" May 12 11:45:44.545: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-856mw] May 12 11:45:44.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-56zn2' May 12 11:45:44.980: INFO: stderr: "" May 12 11:45:44.980: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 12 11:45:44.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-56zn2' May 12 11:45:45.120: INFO: stderr: "" May 12 11:45:45.120: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:45:45.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-56zn2" for this suite. May 12 11:46:07.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:46:08.035: INFO: namespace: e2e-tests-kubectl-56zn2, resource: bindings, ignored listing per whitelist May 12 11:46:08.071: INFO: namespace e2e-tests-kubectl-56zn2 deletion completed in 22.947181836s • [SLOW TEST:36.525 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:46:08.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 12 11:46:08.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x5pqf' May 12 11:46:08.701: INFO: stderr: "" May 12 11:46:08.701: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. 
May 12 11:46:09.944: INFO: Selector matched 1 pods for map[app:redis] May 12 11:46:09.944: INFO: Found 0 / 1 May 12 11:46:10.704: INFO: Selector matched 1 pods for map[app:redis] May 12 11:46:10.704: INFO: Found 0 / 1 May 12 11:46:11.725: INFO: Selector matched 1 pods for map[app:redis] May 12 11:46:11.725: INFO: Found 0 / 1 May 12 11:46:12.704: INFO: Selector matched 1 pods for map[app:redis] May 12 11:46:12.704: INFO: Found 0 / 1 May 12 11:46:13.782: INFO: Selector matched 1 pods for map[app:redis] May 12 11:46:13.783: INFO: Found 0 / 1 May 12 11:46:14.759: INFO: Selector matched 1 pods for map[app:redis] May 12 11:46:14.759: INFO: Found 0 / 1 May 12 11:46:15.827: INFO: Selector matched 1 pods for map[app:redis] May 12 11:46:15.827: INFO: Found 0 / 1 May 12 11:46:17.068: INFO: Selector matched 1 pods for map[app:redis] May 12 11:46:17.069: INFO: Found 1 / 1 May 12 11:46:17.069: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 11:46:17.099: INFO: Selector matched 1 pods for map[app:redis] May 12 11:46:17.099: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 12 11:46:17.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dw8v6 redis-master --namespace=e2e-tests-kubectl-x5pqf' May 12 11:46:17.947: INFO: stderr: "" May 12 11:46:17.947: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 11:46:15.028 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 11:46:15.029 # Server started, Redis version 3.2.12\n1:M 12 May 11:46:15.029 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 May 11:46:15.029 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 12 11:46:17.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dw8v6 redis-master --namespace=e2e-tests-kubectl-x5pqf --tail=1' May 12 11:46:18.225: INFO: stderr: "" May 12 11:46:18.225: INFO: stdout: "1:M 12 May 11:46:15.029 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 12 11:46:18.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dw8v6 redis-master --namespace=e2e-tests-kubectl-x5pqf --limit-bytes=1' May 12 11:46:18.333: INFO: stderr: "" May 12 11:46:18.333: INFO: stdout: " " STEP: exposing timestamps May 12 11:46:18.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dw8v6 redis-master --namespace=e2e-tests-kubectl-x5pqf --tail=1 --timestamps' May 12 11:46:18.434: INFO: 
stderr: "" May 12 11:46:18.434: INFO: stdout: "2020-05-12T11:46:15.029461899Z 1:M 12 May 11:46:15.029 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 12 11:46:20.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dw8v6 redis-master --namespace=e2e-tests-kubectl-x5pqf --since=1s' May 12 11:46:21.046: INFO: stderr: "" May 12 11:46:21.046: INFO: stdout: "" May 12 11:46:21.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-dw8v6 redis-master --namespace=e2e-tests-kubectl-x5pqf --since=24h' May 12 11:46:21.243: INFO: stderr: "" May 12 11:46:21.243: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 11:46:15.028 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 11:46:15.029 # Server started, Redis version 3.2.12\n1:M 12 May 11:46:15.029 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 12 May 11:46:15.029 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 12 11:46:21.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x5pqf' May 12 11:46:21.468: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 11:46:21.468: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 12 11:46:21.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-x5pqf' May 12 11:46:22.098: INFO: stderr: "No resources found.\n" May 12 11:46:22.098: INFO: stdout: "" May 12 11:46:22.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-x5pqf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 11:46:22.380: INFO: stderr: "" May 12 11:46:22.380: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:46:22.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-x5pqf" for this suite. 
May 12 11:46:45.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:46:45.291: INFO: namespace: e2e-tests-kubectl-x5pqf, resource: bindings, ignored listing per whitelist May 12 11:46:45.303: INFO: namespace e2e-tests-kubectl-x5pqf deletion completed in 22.920110499s • [SLOW TEST:37.232 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:46:45.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 11:46:46.382: INFO: Waiting up to 5m0s for pod "pod-41fa52a7-9446-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-kl77k" to be "success or failure" May 12 11:46:46.442: INFO: Pod "pod-41fa52a7-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 60.824442ms May 12 11:46:48.854: INFO: Pod "pod-41fa52a7-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472477314s May 12 11:46:50.857: INFO: Pod "pod-41fa52a7-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475036012s May 12 11:46:52.869: INFO: Pod "pod-41fa52a7-9446-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.487401573s STEP: Saw pod success May 12 11:46:52.869: INFO: Pod "pod-41fa52a7-9446-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:46:52.871: INFO: Trying to get logs from node hunter-worker pod pod-41fa52a7-9446-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 11:46:53.174: INFO: Waiting for pod pod-41fa52a7-9446-11ea-92b2-0242ac11001c to disappear May 12 11:46:53.396: INFO: Pod pod-41fa52a7-9446-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:46:53.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kl77k" for this suite. 
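The `(non-root,0666,default)` EmptyDir test name encodes three parameters: a non-root user, file mode 0666, and the default (node-disk) medium. A hedged sketch of the kind of pod this test exercises — the actual e2e suite uses its own mounttest image, so the image and commands here are illustrative assumptions:

```yaml
# Hypothetical pod: a non-root container writing a 0666-mode file into an
# emptyDir volume backed by the node's default medium.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  securityContext:
    runAsUser: 1001          # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test && chmod 0666 /mnt/test && ls -l /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}             # default medium (node disk)
  restartPolicy: Never
```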
May 12 11:46:59.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:46:59.470: INFO: namespace: e2e-tests-emptydir-kl77k, resource: bindings, ignored listing per whitelist May 12 11:46:59.501: INFO: namespace e2e-tests-emptydir-kl77k deletion completed in 6.101082725s • [SLOW TEST:14.198 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:46:59.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 11:46:59.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-5hclq" to be "success or failure" May 12 11:46:59.608: INFO: Pod "downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.789656ms May 12 11:47:01.613: INFO: Pod "downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020698836s May 12 11:47:03.617: INFO: Pod "downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024322332s May 12 11:47:06.022: INFO: Pod "downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430193155s May 12 11:47:08.026: INFO: Pod "downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.433663456s STEP: Saw pod success May 12 11:47:08.026: INFO: Pod "downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:47:08.029: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 11:47:08.775: INFO: Waiting for pod downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c to disappear May 12 11:47:08.806: INFO: Pod downwardapi-volume-49d8e04f-9446-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:47:08.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5hclq" for this suite. 
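The Downward API volume test above verifies that a container's CPU limit can be projected into a file inside the pod. A minimal sketch of such a manifest (names, image, and values are illustrative assumptions, not the e2e suite's own spec):

```yaml
# Hypothetical pod: downward API volume exposing the container's CPU limit
# as a file the container can read.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m        # file contents expressed in millicores
  restartPolicy: Never
```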
May 12 11:47:14.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:47:14.990: INFO: namespace: e2e-tests-downward-api-5hclq, resource: bindings, ignored listing per whitelist May 12 11:47:15.035: INFO: namespace e2e-tests-downward-api-5hclq deletion completed in 6.225176322s • [SLOW TEST:15.533 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:47:15.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-f8rmw STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f8rmw to expose endpoints map[] May 12 11:47:15.363: INFO: Get endpoints failed (10.720964ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 12 11:47:16.367: INFO: successfully validated that service 
multi-endpoint-test in namespace e2e-tests-services-f8rmw exposes endpoints map[] (1.014730308s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-f8rmw STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f8rmw to expose endpoints map[pod1:[100]] May 12 11:47:20.581: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.207327979s elapsed, will retry) May 12 11:47:21.587: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-f8rmw exposes endpoints map[pod1:[100]] (5.213620681s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-f8rmw STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f8rmw to expose endpoints map[pod1:[100] pod2:[101]] May 12 11:47:25.867: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-f8rmw exposes endpoints map[pod1:[100] pod2:[101]] (4.274813703s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-f8rmw STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f8rmw to expose endpoints map[pod2:[101]] May 12 11:47:26.923: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-f8rmw exposes endpoints map[pod2:[101]] (1.053626424s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-f8rmw STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-f8rmw to expose endpoints map[] May 12 11:47:28.609: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-f8rmw exposes endpoints map[] (1.233484753s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:47:28.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-f8rmw" for this suite. 
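The multiport-endpoints test above watches the endpoints map change as pods are added and removed (`map[pod1:[100]]`, then `map[pod1:[100] pod2:[101]]`, and so on). A sketch of a Service with two named ports targeting the container ports seen in the log — the port names and selector label are assumptions, only the target ports 100 and 101 come from the transcript:

```yaml
# Hypothetical multiport Service: each named port maps to a distinct
# container port, so endpoints appear as pod:[port] pairs per served port.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-demo
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
```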
May 12 11:47:36.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:47:36.913: INFO: namespace: e2e-tests-services-f8rmw, resource: bindings, ignored listing per whitelist May 12 11:47:36.982: INFO: namespace e2e-tests-services-f8rmw deletion completed in 8.087095135s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:21.947 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:47:36.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-60bdae05-9446-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 11:47:38.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-60bfd034-9446-11ea-92b2-0242ac11001c" in namespace "e2e-tests-configmap-pg7h8" to be "success or failure" May 12 11:47:38.030: INFO: Pod 
"pod-configmaps-60bfd034-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.870494ms May 12 11:47:40.034: INFO: Pod "pod-configmaps-60bfd034-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023313679s May 12 11:47:42.088: INFO: Pod "pod-configmaps-60bfd034-9446-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.077746785s May 12 11:47:44.092: INFO: Pod "pod-configmaps-60bfd034-9446-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081606499s STEP: Saw pod success May 12 11:47:44.092: INFO: Pod "pod-configmaps-60bfd034-9446-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:47:44.095: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-60bfd034-9446-11ea-92b2-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 11:47:44.257: INFO: Waiting for pod pod-configmaps-60bfd034-9446-11ea-92b2-0242ac11001c to disappear May 12 11:47:44.279: INFO: Pod pod-configmaps-60bfd034-9446-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:47:44.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pg7h8" for this suite. 
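The ConfigMap test above ("mappings and Item mode set") consumes a ConfigMap as a volume while remapping a key to a different path and setting a per-item file mode. A hedged sketch of that shape (ConfigMap name, key, path, and mode are illustrative assumptions):

```yaml
# Hypothetical pod: ConfigMap volume with an item mapping (key -> custom
# path) and an explicit per-item file mode.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-vol-demo
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lR /etc/configmap-volume && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2
        mode: 0400           # per-item file mode (octal)
  restartPolicy: Never
```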
May 12 11:47:52.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:47:52.303: INFO: namespace: e2e-tests-configmap-pg7h8, resource: bindings, ignored listing per whitelist May 12 11:47:52.369: INFO: namespace e2e-tests-configmap-pg7h8 deletion completed in 8.087000713s • [SLOW TEST:15.387 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:47:52.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-69b6c341-9446-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 11:47:53.187: INFO: Waiting up to 5m0s for pod "pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c" in namespace "e2e-tests-configmap-cztkw" to be "success or failure" May 12 11:47:53.196: INFO: Pod "pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.208477ms May 12 11:47:55.539: INFO: Pod "pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351802982s May 12 11:47:57.542: INFO: Pod "pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354961483s May 12 11:47:59.545: INFO: Pod "pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357650088s May 12 11:48:01.549: INFO: Pod "pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.362035428s STEP: Saw pod success May 12 11:48:01.549: INFO: Pod "pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:48:01.552: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 11:48:02.077: INFO: Waiting for pod pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c to disappear May 12 11:48:02.250: INFO: Pod pod-configmaps-69becd6c-9446-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:48:02.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-cztkw" for this suite. 
May 12 11:48:10.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:48:10.843: INFO: namespace: e2e-tests-configmap-cztkw, resource: bindings, ignored listing per whitelist May 12 11:48:10.857: INFO: namespace e2e-tests-configmap-cztkw deletion completed in 8.602247906s • [SLOW TEST:18.487 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:48:10.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 11:48:11.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74952099-9446-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-9chgb" to be "success or failure" May 12 11:48:11.537: INFO: Pod "downwardapi-volume-74952099-9446-11ea-92b2-0242ac11001c": 
Phase="Pending", Reason="", readiness=false. Elapsed: 55.218064ms May 12 11:48:13.540: INFO: Pod "downwardapi-volume-74952099-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058795624s May 12 11:48:15.547: INFO: Pod "downwardapi-volume-74952099-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065710547s May 12 11:48:17.592: INFO: Pod "downwardapi-volume-74952099-9446-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.110245265s STEP: Saw pod success May 12 11:48:17.592: INFO: Pod "downwardapi-volume-74952099-9446-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:48:17.594: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-74952099-9446-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 11:48:18.197: INFO: Waiting for pod downwardapi-volume-74952099-9446-11ea-92b2-0242ac11001c to disappear May 12 11:48:18.243: INFO: Pod downwardapi-volume-74952099-9446-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:48:18.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9chgb" for this suite. 
May 12 11:48:26.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:48:26.457: INFO: namespace: e2e-tests-projected-9chgb, resource: bindings, ignored listing per whitelist May 12 11:48:26.493: INFO: namespace e2e-tests-projected-9chgb deletion completed in 8.247613289s • [SLOW TEST:15.637 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:48:26.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 11:48:43.038: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:48:43.114: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:48:45.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:48:45.118: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:48:47.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:48:47.118: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:48:49.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:48:49.118: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:48:51.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:48:51.119: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:48:53.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:48:53.118: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:48:55.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:48:55.119: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:48:57.114: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:48:57.118: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:48:59.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:48:59.259: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:49:01.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:49:01.156: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:49:03.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:49:03.118: INFO: Pod 
pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:49:03.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8vprh" for this suite. May 12 11:49:27.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:49:27.538: INFO: namespace: e2e-tests-container-lifecycle-hook-8vprh, resource: bindings, ignored listing per whitelist May 12 11:49:27.671: INFO: namespace e2e-tests-container-lifecycle-hook-8vprh deletion completed in 24.550011443s • [SLOW TEST:61.177 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:49:27.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] 
should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 11:49:28.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a27884f8-9446-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-gkdm7" to be "success or failure" May 12 11:49:28.286: INFO: Pod "downwardapi-volume-a27884f8-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.294228ms May 12 11:49:30.289: INFO: Pod "downwardapi-volume-a27884f8-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019604594s May 12 11:49:32.444: INFO: Pod "downwardapi-volume-a27884f8-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17390431s May 12 11:49:34.448: INFO: Pod "downwardapi-volume-a27884f8-9446-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177911013s STEP: Saw pod success May 12 11:49:34.448: INFO: Pod "downwardapi-volume-a27884f8-9446-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:49:34.451: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a27884f8-9446-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 11:49:34.565: INFO: Waiting for pod downwardapi-volume-a27884f8-9446-11ea-92b2-0242ac11001c to disappear May 12 11:49:34.592: INFO: Pod downwardapi-volume-a27884f8-9446-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:49:34.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gkdm7" for this suite. 
May 12 11:49:40.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:49:40.675: INFO: namespace: e2e-tests-downward-api-gkdm7, resource: bindings, ignored listing per whitelist May 12 11:49:40.698: INFO: namespace e2e-tests-downward-api-gkdm7 deletion completed in 6.101836593s • [SLOW TEST:13.027 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:49:40.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 12 11:49:41.548: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-5gtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-5gtx5/configmaps/e2e-watch-test-resource-version,UID:aa5a2c59-9446-11ea-99e8-0242ac110002,ResourceVersion:10160330,Generation:0,CreationTimestamp:2020-05-12 11:49:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 11:49:41.548: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-5gtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-5gtx5/configmaps/e2e-watch-test-resource-version,UID:aa5a2c59-9446-11ea-99e8-0242ac110002,ResourceVersion:10160331,Generation:0,CreationTimestamp:2020-05-12 11:49:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:49:41.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-5gtx5" for this suite. 
May 12 11:49:49.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:49:49.712: INFO: namespace: e2e-tests-watch-5gtx5, resource: bindings, ignored listing per whitelist May 12 11:49:49.751: INFO: namespace e2e-tests-watch-5gtx5 deletion completed in 8.199595078s • [SLOW TEST:9.053 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:49:49.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 11:49:50.546: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afbfafeb-9446-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-kswcj" to be "success or failure" May 12 11:49:50.845: INFO: Pod "downwardapi-volume-afbfafeb-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 299.298001ms May 12 11:49:53.277: INFO: Pod "downwardapi-volume-afbfafeb-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.730372702s May 12 11:49:55.281: INFO: Pod "downwardapi-volume-afbfafeb-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.734664025s May 12 11:49:57.283: INFO: Pod "downwardapi-volume-afbfafeb-9446-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.737213202s STEP: Saw pod success May 12 11:49:57.283: INFO: Pod "downwardapi-volume-afbfafeb-9446-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 11:49:57.286: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-afbfafeb-9446-11ea-92b2-0242ac11001c container client-container: STEP: delete the pod May 12 11:49:58.020: INFO: Waiting for pod downwardapi-volume-afbfafeb-9446-11ea-92b2-0242ac11001c to disappear May 12 11:49:58.239: INFO: Pod downwardapi-volume-afbfafeb-9446-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:49:58.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kswcj" for this suite. 
May 12 11:50:04.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:50:04.545: INFO: namespace: e2e-tests-downward-api-kswcj, resource: bindings, ignored listing per whitelist May 12 11:50:04.572: INFO: namespace e2e-tests-downward-api-kswcj deletion completed in 6.329068652s • [SLOW TEST:14.821 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:50:04.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 12 11:50:04.720: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ts5n9,SelfLink:/api/v1/namespaces/e2e-tests-watch-ts5n9/configmaps/e2e-watch-test-label-changed,UID:b82ffffd-9446-11ea-99e8-0242ac110002,ResourceVersion:10160407,Generation:0,CreationTimestamp:2020-05-12 11:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 11:50:04.721: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ts5n9,SelfLink:/api/v1/namespaces/e2e-tests-watch-ts5n9/configmaps/e2e-watch-test-label-changed,UID:b82ffffd-9446-11ea-99e8-0242ac110002,ResourceVersion:10160408,Generation:0,CreationTimestamp:2020-05-12 11:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 12 11:50:04.721: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ts5n9,SelfLink:/api/v1/namespaces/e2e-tests-watch-ts5n9/configmaps/e2e-watch-test-label-changed,UID:b82ffffd-9446-11ea-99e8-0242ac110002,ResourceVersion:10160409,Generation:0,CreationTimestamp:2020-05-12 11:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 12 11:50:15.043: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ts5n9,SelfLink:/api/v1/namespaces/e2e-tests-watch-ts5n9/configmaps/e2e-watch-test-label-changed,UID:b82ffffd-9446-11ea-99e8-0242ac110002,ResourceVersion:10160430,Generation:0,CreationTimestamp:2020-05-12 11:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 11:50:15.043: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ts5n9,SelfLink:/api/v1/namespaces/e2e-tests-watch-ts5n9/configmaps/e2e-watch-test-label-changed,UID:b82ffffd-9446-11ea-99e8-0242ac110002,ResourceVersion:10160431,Generation:0,CreationTimestamp:2020-05-12 11:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 12 11:50:15.043: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-ts5n9,SelfLink:/api/v1/namespaces/e2e-tests-watch-ts5n9/configmaps/e2e-watch-test-label-changed,UID:b82ffffd-9446-11ea-99e8-0242ac110002,ResourceVersion:10160432,Generation:0,CreationTimestamp:2020-05-12 11:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:50:15.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-ts5n9" for this suite. May 12 11:50:25.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:50:25.904: INFO: namespace: e2e-tests-watch-ts5n9, resource: bindings, ignored listing per whitelist May 12 11:50:25.953: INFO: namespace e2e-tests-watch-ts5n9 deletion completed in 10.77296661s • [SLOW TEST:21.381 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:50:25.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 11:50:27.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 12 11:50:27.772: INFO: stderr: "" May 12 11:50:27.772: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 12 11:50:27.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jgkg5' May 12 11:50:29.270: INFO: stderr: "" May 12 11:50:29.270: INFO: stdout: "replicationcontroller/redis-master created\n" May 12 11:50:29.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jgkg5' May 12 11:50:30.572: INFO: stderr: "" May 12 11:50:30.572: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
May 12 11:50:31.578: INFO: Selector matched 1 pods for map[app:redis] May 12 11:50:31.578: INFO: Found 0 / 1 May 12 11:50:32.875: INFO: Selector matched 1 pods for map[app:redis] May 12 11:50:32.876: INFO: Found 0 / 1 May 12 11:50:33.575: INFO: Selector matched 1 pods for map[app:redis] May 12 11:50:33.575: INFO: Found 0 / 1 May 12 11:50:35.218: INFO: Selector matched 1 pods for map[app:redis] May 12 11:50:35.218: INFO: Found 0 / 1 May 12 11:50:35.576: INFO: Selector matched 1 pods for map[app:redis] May 12 11:50:35.576: INFO: Found 0 / 1 May 12 11:50:36.578: INFO: Selector matched 1 pods for map[app:redis] May 12 11:50:36.578: INFO: Found 0 / 1 May 12 11:50:37.575: INFO: Selector matched 1 pods for map[app:redis] May 12 11:50:37.575: INFO: Found 1 / 1 May 12 11:50:37.575: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 11:50:37.577: INFO: Selector matched 1 pods for map[app:redis] May 12 11:50:37.577: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 12 11:50:37.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gr5r8 --namespace=e2e-tests-kubectl-jgkg5' May 12 11:50:37.679: INFO: stderr: "" May 12 11:50:37.679: INFO: stdout: "Name: redis-master-gr5r8\nNamespace: e2e-tests-kubectl-jgkg5\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Tue, 12 May 2020 11:50:29 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.46\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://575f77dfb9fbe9f6eefb4bafbd7f15fe17e8f4670053ffd6f8e12c91732d90f1\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 12 May 2020 11:50:35 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-gl2rs (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-gl2rs:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-gl2rs\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned e2e-tests-kubectl-jgkg5/redis-master-gr5r8 to hunter-worker2\n Normal Pulled 5s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" May 12 11:50:37.679: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-jgkg5' May 12 11:50:37.850: INFO: stderr: "" May 12 11:50:37.850: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-jgkg5\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: redis-master-gr5r8\n" May 12 11:50:37.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-jgkg5' May 12 11:50:37.946: INFO: stderr: "" May 12 11:50:37.946: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-jgkg5\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.110.25.28\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.46:6379\nSession Affinity: None\nEvents: \n" May 12 11:50:37.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 12 11:50:38.077: INFO: stderr: "" May 12 11:50:38.077: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type 
Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 12 May 2020 11:50:37 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 12 May 2020 11:50:37 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 12 May 2020 11:50:37 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 12 May 2020 11:50:37 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 57d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
May 12 11:50:38.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-jgkg5'
May 12 11:50:38.187: INFO: stderr: ""
May 12 11:50:38.187: INFO: stdout: "Name: e2e-tests-kubectl-jgkg5\nLabels: e2e-framework=kubectl\n e2e-run=acf66b8d-9436-11ea-92b2-0242ac11001c\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:50:38.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jgkg5" for this suite.
May 12 11:51:04.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:51:04.318: INFO: namespace: e2e-tests-kubectl-jgkg5, resource: bindings, ignored listing per whitelist
May 12 11:51:04.359: INFO: namespace e2e-tests-kubectl-jgkg5 deletion completed in 26.168344346s
• [SLOW TEST:38.406 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:51:04.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 11:51:04.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:51:11.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8d868" for this suite.
May 12 11:52:03.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:52:03.356: INFO: namespace: e2e-tests-pods-8d868, resource: bindings, ignored listing per whitelist
May 12 11:52:03.501: INFO: namespace e2e-tests-pods-8d868 deletion completed in 52.204013253s
• [SLOW TEST:59.141 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:52:03.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 12 11:52:03.707: INFO: Waiting up to 5m0s for pod "pod-ff1c4a70-9446-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-dw7ws" to be "success or failure"
May 12 11:52:03.741: INFO: Pod "pod-ff1c4a70-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.484008ms
May 12 11:52:05.745: INFO: Pod "pod-ff1c4a70-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037580555s
May 12 11:52:07.750: INFO: Pod "pod-ff1c4a70-9446-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042327096s
May 12 11:52:09.855: INFO: Pod "pod-ff1c4a70-9446-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.147036209s
May 12 11:52:11.859: INFO: Pod "pod-ff1c4a70-9446-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151954907s
STEP: Saw pod success
May 12 11:52:11.859: INFO: Pod "pod-ff1c4a70-9446-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:52:11.907: INFO: Trying to get logs from node hunter-worker2 pod pod-ff1c4a70-9446-11ea-92b2-0242ac11001c container test-container: 
STEP: delete the pod
May 12 11:52:12.932: INFO: Waiting for pod pod-ff1c4a70-9446-11ea-92b2-0242ac11001c to disappear
May 12 11:52:13.070: INFO: Pod pod-ff1c4a70-9446-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:52:13.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dw7ws" for this suite.
May 12 11:52:25.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:52:25.963: INFO: namespace: e2e-tests-emptydir-dw7ws, resource: bindings, ignored listing per whitelist
May 12 11:52:25.994: INFO: namespace e2e-tests-emptydir-dw7ws deletion completed in 12.919074821s
• [SLOW TEST:22.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:52:25.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-0dbd4b86-9447-11ea-92b2-0242ac11001c
STEP: Creating configMap with name cm-test-opt-upd-0dbd4c0e-9447-11ea-92b2-0242ac11001c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0dbd4b86-9447-11ea-92b2-0242ac11001c
STEP: Updating configmap cm-test-opt-upd-0dbd4c0e-9447-11ea-92b2-0242ac11001c
STEP: Creating configMap with name cm-test-opt-create-0dbd4c40-9447-11ea-92b2-0242ac11001c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:54:15.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gbwnq" for this suite.
May 12 11:54:41.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:54:41.453: INFO: namespace: e2e-tests-configmap-gbwnq, resource: bindings, ignored listing per whitelist
May 12 11:54:41.482: INFO: namespace e2e-tests-configmap-gbwnq deletion completed in 26.110559026s
• [SLOW TEST:135.487 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:54:41.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9vs2q
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 12 11:54:41.583: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 12 11:55:14.291: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.49:8080/dial?request=hostName&protocol=http&host=10.244.2.48&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9vs2q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 11:55:14.292: INFO: >>> kubeConfig: /root/.kube/config
I0512 11:55:14.325543 6 log.go:172] (0xc0000eb080) (0xc00219db80) Create stream
I0512 11:55:14.325592 6 log.go:172] (0xc0000eb080) (0xc00219db80) Stream added, broadcasting: 1
I0512 11:55:14.328345 6 log.go:172] (0xc0000eb080) Reply frame received for 1
I0512 11:55:14.328377 6 log.go:172] (0xc0000eb080) (0xc00219dc20) Create stream
I0512 11:55:14.328391 6 log.go:172] (0xc0000eb080) (0xc00219dc20) Stream added, broadcasting: 3
I0512 11:55:14.329729 6 log.go:172] (0xc0000eb080) Reply frame received for 3
I0512 11:55:14.329770 6 log.go:172] (0xc0000eb080) (0xc00209a3c0) Create stream
I0512 11:55:14.329788 6 log.go:172] (0xc0000eb080) (0xc00209a3c0) Stream added, broadcasting: 5
I0512 11:55:14.330783 6 log.go:172] (0xc0000eb080) Reply frame received for 5
I0512 11:55:14.403146 6 log.go:172] (0xc0000eb080) Data frame received for 3
I0512 11:55:14.403176 6 log.go:172] (0xc00219dc20) (3) Data frame handling
I0512 11:55:14.403204 6 log.go:172] (0xc00219dc20) (3) Data frame sent
I0512 11:55:14.403660 6 log.go:172] (0xc0000eb080) Data frame received for 5
I0512 11:55:14.403680 6 log.go:172] (0xc00209a3c0) (5) Data frame handling
I0512 11:55:14.403714 6 log.go:172] (0xc0000eb080) Data frame received for 3
I0512 11:55:14.403751 6 log.go:172] (0xc00219dc20) (3) Data frame handling
I0512 11:55:14.406224 6 log.go:172] (0xc0000eb080) Data frame received for 1
I0512 11:55:14.406246 6 log.go:172] (0xc00219db80) (1) Data frame handling
I0512 11:55:14.406264 6 log.go:172] (0xc00219db80) (1) Data frame sent
I0512 11:55:14.406293 6 log.go:172] (0xc0000eb080) (0xc00219db80) Stream removed, broadcasting: 1
I0512 11:55:14.406326 6 log.go:172] (0xc0000eb080) Go away received
I0512 11:55:14.406450 6 log.go:172] (0xc0000eb080) (0xc00219db80) Stream removed, broadcasting: 1
I0512 11:55:14.406471 6 log.go:172] (0xc0000eb080) (0xc00219dc20) Stream removed, broadcasting: 3
I0512 11:55:14.406481 6 log.go:172] (0xc0000eb080) (0xc00209a3c0) Stream removed, broadcasting: 5
May 12 11:55:14.406: INFO: Waiting for endpoints: map[]
May 12 11:55:14.467: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.49:8080/dial?request=hostName&protocol=http&host=10.244.1.130&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9vs2q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 11:55:14.467: INFO: >>> kubeConfig: /root/.kube/config
I0512 11:55:14.495223 6 log.go:172] (0xc0009a68f0) (0xc0023308c0) Create stream
I0512 11:55:14.495266 6 log.go:172] (0xc0009a68f0) (0xc0023308c0) Stream added, broadcasting: 1
I0512 11:55:14.497800 6 log.go:172] (0xc0009a68f0) Reply frame received for 1
I0512 11:55:14.497866 6 log.go:172] (0xc0009a68f0) (0xc0020f86e0) Create stream
I0512 11:55:14.497885 6 log.go:172] (0xc0009a68f0) (0xc0020f86e0) Stream added, broadcasting: 3
I0512 11:55:14.499289 6 log.go:172] (0xc0009a68f0) Reply frame received for 3
I0512 11:55:14.499343 6 log.go:172] (0xc0009a68f0) (0xc00209a460) Create stream
I0512 11:55:14.499372 6 log.go:172] (0xc0009a68f0) (0xc00209a460) Stream added, broadcasting: 5
I0512 11:55:14.500403 6 log.go:172] (0xc0009a68f0) Reply frame received for 5
I0512 11:55:14.580890 6 log.go:172] (0xc0009a68f0) Data frame received for 3
I0512 11:55:14.580931 6 log.go:172] (0xc0020f86e0) (3) Data frame handling
I0512 11:55:14.580959 6 log.go:172] (0xc0020f86e0) (3) Data frame sent
I0512 11:55:14.581825 6 log.go:172] (0xc0009a68f0) Data frame received for 5
I0512 11:55:14.581863 6 log.go:172] (0xc00209a460) (5) Data frame handling
I0512 11:55:14.581986 6 log.go:172] (0xc0009a68f0) Data frame received for 3
I0512 11:55:14.582008 6 log.go:172] (0xc0020f86e0) (3) Data frame handling
I0512 11:55:14.583405 6 log.go:172] (0xc0009a68f0) Data frame received for 1
I0512 11:55:14.583433 6 log.go:172] (0xc0023308c0) (1) Data frame handling
I0512 11:55:14.583466 6 log.go:172] (0xc0023308c0) (1) Data frame sent
I0512 11:55:14.583496 6 log.go:172] (0xc0009a68f0) (0xc0023308c0) Stream removed, broadcasting: 1
I0512 11:55:14.583520 6 log.go:172] (0xc0009a68f0) Go away received
I0512 11:55:14.583646 6 log.go:172] (0xc0009a68f0) (0xc0023308c0) Stream removed, broadcasting: 1
I0512 11:55:14.583682 6 log.go:172] (0xc0009a68f0) (0xc0020f86e0) Stream removed, broadcasting: 3
I0512 11:55:14.583691 6 log.go:172] (0xc0009a68f0) (0xc00209a460) Stream removed, broadcasting: 5
May 12 11:55:14.583: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:55:14.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9vs2q" for this suite.
May 12 11:55:40.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:55:40.773: INFO: namespace: e2e-tests-pod-network-test-9vs2q, resource: bindings, ignored listing per whitelist
May 12 11:55:40.776: INFO: namespace e2e-tests-pod-network-test-9vs2q deletion completed in 26.188714068s
• [SLOW TEST:59.294 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:55:40.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-80b7cabf-9447-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume configMaps
May 12 11:55:41.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c" in namespace "e2e-tests-configmap-6599t" to be "success or failure"
May 12 11:55:41.457: INFO: Pod "pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 83.389272ms
May 12 11:55:43.461: INFO: Pod "pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086924176s
May 12 11:55:45.465: INFO: Pod "pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090942359s
May 12 11:55:47.468: INFO: Pod "pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.094005246s
May 12 11:55:49.470: INFO: Pod "pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096885782s
STEP: Saw pod success
May 12 11:55:49.471: INFO: Pod "pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:55:49.473: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c container configmap-volume-test: 
STEP: delete the pod
May 12 11:55:49.494: INFO: Waiting for pod pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c to disappear
May 12 11:55:49.511: INFO: Pod pod-configmaps-80b976eb-9447-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:55:49.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6599t" for this suite.
May 12 11:55:55.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:55:55.807: INFO: namespace: e2e-tests-configmap-6599t, resource: bindings, ignored listing per whitelist
May 12 11:55:55.965: INFO: namespace e2e-tests-configmap-6599t deletion completed in 6.452113686s
• [SLOW TEST:15.189 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:55:55.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0512 11:56:08.718083 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 11:56:08.718: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:56:08.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-lhp25" for this suite.
May 12 11:56:25.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:56:25.721: INFO: namespace: e2e-tests-gc-lhp25, resource: bindings, ignored listing per whitelist
May 12 11:56:25.760: INFO: namespace e2e-tests-gc-lhp25 deletion completed in 16.766558233s
• [SLOW TEST:29.794 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:56:25.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
May 12 11:56:25.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
May 12 11:56:35.417: INFO: stderr: ""
May 12 11:56:35.417: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:56:35.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dnscd" for this suite.
May 12 11:56:41.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:56:41.601: INFO: namespace: e2e-tests-kubectl-dnscd, resource: bindings, ignored listing per whitelist
May 12 11:56:41.604: INFO: namespace e2e-tests-kubectl-dnscd deletion completed in 6.184340104s
• [SLOW TEST:15.844 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:56:41.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 12 11:56:52.113: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:56:52.162: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:56:54.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:56:54.166: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:56:56.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:56:56.166: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:56:58.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:56:58.165: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:00.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:00.288: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:02.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:02.167: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:04.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:04.167: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:06.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:06.166: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:08.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:08.166: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:10.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:10.169: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:12.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:12.552: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:14.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:14.165: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:16.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:16.166: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:18.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:18.167: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:20.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:20.166: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 11:57:22.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 11:57:22.185: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:57:22.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vmmd8" for this suite.
May 12 11:57:48.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:57:48.465: INFO: namespace: e2e-tests-container-lifecycle-hook-vmmd8, resource: bindings, ignored listing per whitelist May 12 11:57:48.470: INFO: namespace e2e-tests-container-lifecycle-hook-vmmd8 deletion completed in 26.275860379s • [SLOW TEST:66.866 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:57:48.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 11:57:49.653: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 12 11:57:49.658: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-f82mq/daemonsets","resourceVersion":"10161728"},"items":null} May 12 11:57:49.660: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-f82mq/pods","resourceVersion":"10161728"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 11:57:49.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-f82mq" for this suite. May 12 11:57:57.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:57:58.028: INFO: namespace: e2e-tests-daemonsets-f82mq, resource: bindings, ignored listing per whitelist May 12 11:57:58.030: INFO: namespace e2e-tests-daemonsets-f82mq deletion completed in 8.361182321s S [SKIPPING] [9.560 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 11:57:49.653: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 11:57:58.031: INFO: >>> kubeConfig: 
/root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
May 12 11:57:58.463: INFO: Waiting up to 5m0s for pod "client-containers-d28d382b-9447-11ea-92b2-0242ac11001c" in namespace "e2e-tests-containers-dz5md" to be "success or failure"
May 12 11:57:58.817: INFO: Pod "client-containers-d28d382b-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 354.344308ms
May 12 11:58:00.820: INFO: Pod "client-containers-d28d382b-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357752174s
May 12 11:58:02.825: INFO: Pod "client-containers-d28d382b-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361908859s
May 12 11:58:05.345: INFO: Pod "client-containers-d28d382b-9447-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.88188853s
May 12 11:58:07.347: INFO: Pod "client-containers-d28d382b-9447-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.884618564s
STEP: Saw pod success
May 12 11:58:07.347: INFO: Pod "client-containers-d28d382b-9447-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:58:07.349: INFO: Trying to get logs from node hunter-worker pod client-containers-d28d382b-9447-11ea-92b2-0242ac11001c container test-container:
STEP: delete the pod
May 12 11:58:07.391: INFO: Waiting for pod client-containers-d28d382b-9447-11ea-92b2-0242ac11001c to disappear
May 12 11:58:07.394: INFO: Pod client-containers-d28d382b-9447-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:58:07.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-dz5md" for this suite.
May 12 11:58:15.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:58:15.460: INFO: namespace: e2e-tests-containers-dz5md, resource: bindings, ignored listing per whitelist
May 12 11:58:15.480: INFO: namespace e2e-tests-containers-dz5md deletion completed in 8.083118111s
• [SLOW TEST:17.450 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12
11:58:15.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-dce8e1a0-9447-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume configMaps
May 12 11:58:16.008: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-8f987" to be "success or failure"
May 12 11:58:16.010: INFO: Pod "pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304558ms
May 12 11:58:18.014: INFO: Pod "pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006452424s
May 12 11:58:20.212: INFO: Pod "pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204254114s
May 12 11:58:22.583: INFO: Pod "pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.575565422s
May 12 11:58:24.588: INFO: Pod "pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.580360144s
STEP: Saw pod success
May 12 11:58:24.588: INFO: Pod "pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:58:24.591: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c container projected-configmap-volume-test:
STEP: delete the pod
May 12 11:58:24.921: INFO: Waiting for pod pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c to disappear
May 12 11:58:24.950: INFO: Pod pod-projected-configmaps-dcead48e-9447-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:58:24.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8f987" for this suite.
May 12 11:58:31.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:58:31.043: INFO: namespace: e2e-tests-projected-8f987, resource: bindings, ignored listing per whitelist
May 12 11:58:31.091: INFO: namespace e2e-tests-projected-8f987 deletion completed in 6.137008748s
• [SLOW TEST:15.610 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:58:31.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 12 11:58:31.208: INFO: Waiting up to 5m0s for pod "downward-api-e6140dd3-9447-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-7lbkf" to be "success or failure"
May 12 11:58:31.216: INFO: Pod "downward-api-e6140dd3-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217259ms
May 12 11:58:33.467: INFO: Pod "downward-api-e6140dd3-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259203666s
May 12 11:58:35.470: INFO: Pod "downward-api-e6140dd3-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261891959s
May 12 11:58:37.473: INFO: Pod "downward-api-e6140dd3-9447-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.264763799s
STEP: Saw pod success
May 12 11:58:37.473: INFO: Pod "downward-api-e6140dd3-9447-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:58:37.475: INFO: Trying to get logs from node hunter-worker pod downward-api-e6140dd3-9447-11ea-92b2-0242ac11001c container dapi-container:
STEP: delete the pod
May 12 11:58:37.620: INFO: Waiting for pod downward-api-e6140dd3-9447-11ea-92b2-0242ac11001c to disappear
May 12 11:58:37.677: INFO: Pod downward-api-e6140dd3-9447-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:58:37.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7lbkf" for this suite.
May 12 11:58:44.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:58:44.175: INFO: namespace: e2e-tests-downward-api-7lbkf, resource: bindings, ignored listing per whitelist
May 12 11:58:44.222: INFO: namespace e2e-tests-downward-api-7lbkf deletion completed in 6.519524019s
• [SLOW TEST:13.131 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:58:44.222: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-ee016cdf-9447-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume secrets
May 12 11:58:44.497: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee01b60e-9447-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-dqt8f" to be "success or failure"
May 12 11:58:44.502: INFO: Pod "pod-projected-secrets-ee01b60e-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.765033ms
May 12 11:58:46.507: INFO: Pod "pod-projected-secrets-ee01b60e-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009560298s
May 12 11:58:48.512: INFO: Pod "pod-projected-secrets-ee01b60e-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014586538s
May 12 11:58:50.515: INFO: Pod "pod-projected-secrets-ee01b60e-9447-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01763907s
STEP: Saw pod success
May 12 11:58:50.515: INFO: Pod "pod-projected-secrets-ee01b60e-9447-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:58:50.516: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-ee01b60e-9447-11ea-92b2-0242ac11001c container projected-secret-volume-test:
STEP: delete the pod
May 12 11:58:50.539: INFO: Waiting for pod pod-projected-secrets-ee01b60e-9447-11ea-92b2-0242ac11001c to disappear
May 12 11:58:50.664: INFO: Pod pod-projected-secrets-ee01b60e-9447-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:58:50.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dqt8f" for this suite.
May 12 11:58:56.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:58:56.708: INFO: namespace: e2e-tests-projected-dqt8f, resource: bindings, ignored listing per whitelist
May 12 11:58:56.749: INFO: namespace e2e-tests-projected-dqt8f deletion completed in 6.082339912s
• [SLOW TEST:12.527 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP:
Creating a kubernetes client
May 12 11:58:56.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-f562dc5d-9447-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume secrets
May 12 11:58:56.887: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f564356b-9447-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-knd4g" to be "success or failure"
May 12 11:58:56.891: INFO: Pod "pod-projected-secrets-f564356b-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.8339ms
May 12 11:58:58.895: INFO: Pod "pod-projected-secrets-f564356b-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007975578s
May 12 11:59:00.899: INFO: Pod "pod-projected-secrets-f564356b-9447-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.011697158s
May 12 11:59:02.902: INFO: Pod "pod-projected-secrets-f564356b-9447-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014992019s
STEP: Saw pod success
May 12 11:59:02.902: INFO: Pod "pod-projected-secrets-f564356b-9447-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:59:02.905: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-f564356b-9447-11ea-92b2-0242ac11001c container secret-volume-test:
STEP: delete the pod
May 12 11:59:03.155: INFO: Waiting for pod pod-projected-secrets-f564356b-9447-11ea-92b2-0242ac11001c to disappear
May 12 11:59:03.458: INFO: Pod pod-projected-secrets-f564356b-9447-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:59:03.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-knd4g" for this suite.
May 12 11:59:11.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:59:12.044: INFO: namespace: e2e-tests-projected-knd4g, resource: bindings, ignored listing per whitelist
May 12 11:59:12.056: INFO: namespace e2e-tests-projected-knd4g deletion completed in 8.303964953s
• [SLOW TEST:15.307 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:59:12.056: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
May 12 11:59:12.415: INFO: Waiting up to 5m0s for pod "var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c" in namespace "e2e-tests-var-expansion-ww2zl" to be "success or failure"
May 12 11:59:12.420: INFO: Pod "var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.596478ms
May 12 11:59:14.445: INFO: Pod "var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030448614s
May 12 11:59:16.448: INFO: Pod "var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033763305s
May 12 11:59:18.499: INFO: Pod "var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.084484753s
May 12 11:59:20.503: INFO: Pod "var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088274967s
STEP: Saw pod success
May 12 11:59:20.503: INFO: Pod "var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:59:20.505: INFO: Trying to get logs from node hunter-worker pod var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c container dapi-container:
STEP: delete the pod
May 12 11:59:21.185: INFO: Waiting for pod var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c to disappear
May 12 11:59:21.811: INFO: Pod var-expansion-fe9c3126-9447-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:59:21.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-ww2zl" for this suite.
May 12 11:59:28.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:59:28.259: INFO: namespace: e2e-tests-var-expansion-ww2zl, resource: bindings, ignored listing per whitelist
May 12 11:59:28.281: INFO: namespace e2e-tests-var-expansion-ww2zl deletion completed in 6.466597183s
• [SLOW TEST:16.225 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:59:28.281: INFO:
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 12 11:59:28.838: INFO: Waiting up to 5m0s for pod "downward-api-08583af4-9448-11ea-92b2-0242ac11001c" in namespace "e2e-tests-downward-api-qkszn" to be "success or failure"
May 12 11:59:29.002: INFO: Pod "downward-api-08583af4-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 164.181104ms
May 12 11:59:31.048: INFO: Pod "downward-api-08583af4-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209458532s
May 12 11:59:33.051: INFO: Pod "downward-api-08583af4-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21312983s
May 12 11:59:35.054: INFO: Pod "downward-api-08583af4-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216189332s
May 12 11:59:37.707: INFO: Pod "downward-api-08583af4-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.868709736s
May 12 11:59:39.709: INFO: Pod "downward-api-08583af4-9448-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.871279792s
STEP: Saw pod success
May 12 11:59:39.710: INFO: Pod "downward-api-08583af4-9448-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:59:39.711: INFO: Trying to get logs from node hunter-worker pod downward-api-08583af4-9448-11ea-92b2-0242ac11001c container dapi-container:
STEP: delete the pod
May 12 11:59:39.953: INFO: Waiting for pod downward-api-08583af4-9448-11ea-92b2-0242ac11001c to disappear
May 12 11:59:40.170: INFO: Pod downward-api-08583af4-9448-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:59:40.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qkszn" for this suite.
May 12 11:59:48.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 11:59:48.279: INFO: namespace: e2e-tests-downward-api-qkszn, resource: bindings, ignored listing per whitelist
May 12 11:59:48.338: INFO: namespace e2e-tests-downward-api-qkszn deletion completed in 8.164168718s
• [SLOW TEST:20.057 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 11:59:48.338: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-14df217a-9448-11ea-92b2-0242ac11001c
STEP: Creating a pod to test consume secrets
May 12 11:59:50.135: INFO: Waiting up to 5m0s for pod "pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c" in namespace "e2e-tests-secrets-9cz44" to be "success or failure"
May 12 11:59:50.408: INFO: Pod "pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 273.140544ms
May 12 11:59:52.413: INFO: Pod "pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277284514s
May 12 11:59:54.432: INFO: Pod "pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29629068s
May 12 11:59:56.680: INFO: Pod "pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.544719563s
May 12 11:59:58.683: INFO: Pod "pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.547985003s
STEP: Saw pod success
May 12 11:59:58.683: INFO: Pod "pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c" satisfied condition "success or failure"
May 12 11:59:58.686: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c container secret-volume-test:
STEP: delete the pod
May 12 11:59:58.816: INFO: Waiting for pod pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c to disappear
May 12 11:59:59.061: INFO: Pod pod-secrets-14e8bdc8-9448-11ea-92b2-0242ac11001c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 11:59:59.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9cz44" for this suite.
May 12 12:00:07.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:00:07.591: INFO: namespace: e2e-tests-secrets-9cz44, resource: bindings, ignored listing per whitelist
May 12 12:00:07.600: INFO: namespace e2e-tests-secrets-9cz44 deletion completed in 8.534368647s
• [SLOW TEST:19.261 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 12:00:07.600: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0512 12:00:48.883525 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 12:00:48.883: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 12:00:48.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rbgvl" for this suite.
May 12 12:01:01.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:01:01.144: INFO: namespace: e2e-tests-gc-rbgvl, resource: bindings, ignored listing per whitelist
May 12 12:01:01.177: INFO: namespace e2e-tests-gc-rbgvl deletion completed in 12.290580722s
• [SLOW TEST:53.577 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 12:01:01.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 12 12:01:08.066: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3f93051f-9448-11ea-92b2-0242ac11001c"
May 12 12:01:08.066: INFO: Waiting up to 5m0s for pod
"pod-update-activedeadlineseconds-3f93051f-9448-11ea-92b2-0242ac11001c" in namespace "e2e-tests-pods-8zqf5" to be "terminated due to deadline exceeded"
May 12 12:01:08.089: INFO: Pod "pod-update-activedeadlineseconds-3f93051f-9448-11ea-92b2-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 23.462774ms
May 12 12:01:10.285: INFO: Pod "pod-update-activedeadlineseconds-3f93051f-9448-11ea-92b2-0242ac11001c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.219652975s
May 12 12:01:10.285: INFO: Pod "pod-update-activedeadlineseconds-3f93051f-9448-11ea-92b2-0242ac11001c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 12:01:10.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8zqf5" for this suite.
May 12 12:01:16.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:01:16.322: INFO: namespace: e2e-tests-pods-8zqf5, resource: bindings, ignored listing per whitelist
May 12 12:01:16.362: INFO: namespace e2e-tests-pods-8zqf5 deletion completed in 6.074080405s
• [SLOW TEST:15.185 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client May 12 12:01:16.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-48b75792-9448-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 12:01:16.743: INFO: Waiting up to 5m0s for pod "pod-configmaps-48b7ec4b-9448-11ea-92b2-0242ac11001c" in namespace "e2e-tests-configmap-8kpvn" to be "success or failure" May 12 12:01:16.750: INFO: Pod "pod-configmaps-48b7ec4b-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.607323ms May 12 12:01:18.777: INFO: Pod "pod-configmaps-48b7ec4b-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034101274s May 12 12:01:20.781: INFO: Pod "pod-configmaps-48b7ec4b-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038025062s May 12 12:01:22.784: INFO: Pod "pod-configmaps-48b7ec4b-9448-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041187929s STEP: Saw pod success May 12 12:01:22.784: INFO: Pod "pod-configmaps-48b7ec4b-9448-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 12:01:22.786: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-48b7ec4b-9448-11ea-92b2-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 12:01:22.862: INFO: Waiting for pod pod-configmaps-48b7ec4b-9448-11ea-92b2-0242ac11001c to disappear May 12 12:01:22.871: INFO: Pod pod-configmaps-48b7ec4b-9448-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 12:01:22.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8kpvn" for this suite. May 12 12:01:30.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:01:30.911: INFO: namespace: e2e-tests-configmap-8kpvn, resource: bindings, ignored listing per whitelist May 12 12:01:30.952: INFO: namespace e2e-tests-configmap-8kpvn deletion completed in 8.078301116s • [SLOW TEST:14.590 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 12:01:30.952: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 12 12:01:31.090: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 12:01:31.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nxprl" for this suite. May 12 12:01:37.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:01:37.261: INFO: namespace: e2e-tests-kubectl-nxprl, resource: bindings, ignored listing per whitelist May 12 12:01:37.314: INFO: namespace e2e-tests-kubectl-nxprl deletion completed in 6.129337718s • [SLOW TEST:6.361 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 12:01:37.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 12 12:01:37.956: INFO: created pod pod-service-account-defaultsa May 12 12:01:37.956: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 12 12:01:37.982: INFO: created pod pod-service-account-mountsa May 12 12:01:37.982: INFO: pod pod-service-account-mountsa service account token volume mount: true May 12 12:01:38.119: INFO: created pod pod-service-account-nomountsa May 12 12:01:38.119: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 12 12:01:38.183: INFO: created pod pod-service-account-defaultsa-mountspec May 12 12:01:38.183: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 12 12:01:38.293: INFO: created pod pod-service-account-mountsa-mountspec May 12 12:01:38.293: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 12 12:01:38.307: INFO: created pod pod-service-account-nomountsa-mountspec May 12 12:01:38.307: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 12 12:01:38.350: INFO: created pod pod-service-account-defaultsa-nomountspec May 12 12:01:38.350: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 12 12:01:38.391: INFO: created pod pod-service-account-mountsa-nomountspec May 12 12:01:38.391: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: 
false May 12 12:01:38.463: INFO: created pod pod-service-account-nomountsa-nomountspec May 12 12:01:38.463: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 12:01:38.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-tshb5" for this suite. May 12 12:02:12.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:02:12.728: INFO: namespace: e2e-tests-svcaccounts-tshb5, resource: bindings, ignored listing per whitelist May 12 12:02:12.735: INFO: namespace e2e-tests-svcaccounts-tshb5 deletion completed in 34.137229736s • [SLOW TEST:35.421 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 12:02:12.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-6a4270eb-9448-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 12:02:12.975: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a44ef5a-9448-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-qh4bd" to be "success or failure" May 12 12:02:13.084: INFO: Pod "pod-projected-configmaps-6a44ef5a-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 108.53235ms May 12 12:02:15.088: INFO: Pod "pod-projected-configmaps-6a44ef5a-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112427912s May 12 12:02:17.091: INFO: Pod "pod-projected-configmaps-6a44ef5a-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115828842s May 12 12:02:19.245: INFO: Pod "pod-projected-configmaps-6a44ef5a-9448-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.269720967s STEP: Saw pod success May 12 12:02:19.245: INFO: Pod "pod-projected-configmaps-6a44ef5a-9448-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 12:02:19.248: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6a44ef5a-9448-11ea-92b2-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 12:02:19.594: INFO: Waiting for pod pod-projected-configmaps-6a44ef5a-9448-11ea-92b2-0242ac11001c to disappear May 12 12:02:20.065: INFO: Pod pod-projected-configmaps-6a44ef5a-9448-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 12:02:20.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qh4bd" for this suite. 
May 12 12:02:28.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:02:28.477: INFO: namespace: e2e-tests-projected-qh4bd, resource: bindings, ignored listing per whitelist May 12 12:02:28.627: INFO: namespace e2e-tests-projected-qh4bd deletion completed in 8.557272881s • [SLOW TEST:15.891 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 12:02:28.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 12:02:35.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9mdj6" for this suite. 
May 12 12:02:41.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:02:41.838: INFO: namespace: e2e-tests-emptydir-wrapper-9mdj6, resource: bindings, ignored listing per whitelist May 12 12:02:41.896: INFO: namespace e2e-tests-emptydir-wrapper-9mdj6 deletion completed in 6.323537384s • [SLOW TEST:13.269 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 12:02:41.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 12 12:02:49.745: INFO: Successfully updated pod "annotationupdate7c2d873f-9448-11ea-92b2-0242ac11001c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 12:02:51.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "e2e-tests-downward-api-t47xl" for this suite. May 12 12:03:17.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:03:17.815: INFO: namespace: e2e-tests-downward-api-t47xl, resource: bindings, ignored listing per whitelist May 12 12:03:17.873: INFO: namespace e2e-tests-downward-api-t47xl deletion completed in 26.094301438s • [SLOW TEST:35.977 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 12:03:17.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-91101b95-9448-11ea-92b2-0242ac11001c STEP: Creating a pod to test consume secrets May 12 12:03:18.088: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91132e0b-9448-11ea-92b2-0242ac11001c" in namespace "e2e-tests-projected-c7pgn" to be "success or failure" May 12 12:03:18.092: INFO: Pod 
"pod-projected-secrets-91132e0b-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.907827ms May 12 12:03:20.127: INFO: Pod "pod-projected-secrets-91132e0b-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039482719s May 12 12:03:22.131: INFO: Pod "pod-projected-secrets-91132e0b-9448-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043100803s STEP: Saw pod success May 12 12:03:22.131: INFO: Pod "pod-projected-secrets-91132e0b-9448-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 12:03:22.134: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-91132e0b-9448-11ea-92b2-0242ac11001c container projected-secret-volume-test: STEP: delete the pod May 12 12:03:22.235: INFO: Waiting for pod pod-projected-secrets-91132e0b-9448-11ea-92b2-0242ac11001c to disappear May 12 12:03:22.262: INFO: Pod pod-projected-secrets-91132e0b-9448-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 12:03:22.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c7pgn" for this suite. 
May 12 12:03:28.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:03:28.564: INFO: namespace: e2e-tests-projected-c7pgn, resource: bindings, ignored listing per whitelist May 12 12:03:28.702: INFO: namespace e2e-tests-projected-c7pgn deletion completed in 6.43663523s • [SLOW TEST:10.828 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 12:03:28.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 12 12:03:39.340: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 12:03:39.371: INFO: Pod pod-with-prestop-http-hook still exists May 12 12:03:41.372: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 12:03:41.444: INFO: Pod pod-with-prestop-http-hook still exists May 12 12:03:43.372: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 12:03:43.375: INFO: Pod pod-with-prestop-http-hook still exists May 12 12:03:45.372: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 12:03:45.402: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 12:03:45.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zxqs6" for this suite. 
May 12 12:04:07.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:04:07.815: INFO: namespace: e2e-tests-container-lifecycle-hook-zxqs6, resource: bindings, ignored listing per whitelist May 12 12:04:07.843: INFO: namespace e2e-tests-container-lifecycle-hook-zxqs6 deletion completed in 22.433402963s • [SLOW TEST:39.141 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 12:04:07.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 12:04:08.632: INFO: Waiting up to 5m0s for pod "pod-af140be9-9448-11ea-92b2-0242ac11001c" in namespace "e2e-tests-emptydir-bj8bd" to be "success or failure" May 12 12:04:08.714: INFO: Pod "pod-af140be9-9448-11ea-92b2-0242ac11001c": Phase="Pending", 
Reason="", readiness=false. Elapsed: 81.756522ms May 12 12:04:10.852: INFO: Pod "pod-af140be9-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220088452s May 12 12:04:12.875: INFO: Pod "pod-af140be9-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243105124s May 12 12:04:14.878: INFO: Pod "pod-af140be9-9448-11ea-92b2-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246100905s May 12 12:04:17.120: INFO: Pod "pod-af140be9-9448-11ea-92b2-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.488635153s STEP: Saw pod success May 12 12:04:17.121: INFO: Pod "pod-af140be9-9448-11ea-92b2-0242ac11001c" satisfied condition "success or failure" May 12 12:04:17.123: INFO: Trying to get logs from node hunter-worker pod pod-af140be9-9448-11ea-92b2-0242ac11001c container test-container: STEP: delete the pod May 12 12:04:17.694: INFO: Waiting for pod pod-af140be9-9448-11ea-92b2-0242ac11001c to disappear May 12 12:04:17.728: INFO: Pod pod-af140be9-9448-11ea-92b2-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 12:04:17.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bj8bd" for this suite. 
May 12 12:04:27.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:04:27.979: INFO: namespace: e2e-tests-emptydir-bj8bd, resource: bindings, ignored listing per whitelist May 12 12:04:28.027: INFO: namespace e2e-tests-emptydir-bj8bd deletion completed in 10.296467312s • [SLOW TEST:20.183 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 12:04:28.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-nkhtn [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace 
e2e-tests-statefulset-nkhtn STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-nkhtn May 12 12:04:29.124: INFO: Found 0 stateful pods, waiting for 1 May 12 12:04:39.128: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 12 12:04:39.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 12:04:39.738: INFO: stderr: "I0512 12:04:39.246179 3716 log.go:172] (0xc000138790) (0xc0007b7360) Create stream\nI0512 12:04:39.246219 3716 log.go:172] (0xc000138790) (0xc0007b7360) Stream added, broadcasting: 1\nI0512 12:04:39.248086 3716 log.go:172] (0xc000138790) Reply frame received for 1\nI0512 12:04:39.248126 3716 log.go:172] (0xc000138790) (0xc00072a000) Create stream\nI0512 12:04:39.248136 3716 log.go:172] (0xc000138790) (0xc00072a000) Stream added, broadcasting: 3\nI0512 12:04:39.248742 3716 log.go:172] (0xc000138790) Reply frame received for 3\nI0512 12:04:39.248766 3716 log.go:172] (0xc000138790) (0xc0002c6000) Create stream\nI0512 12:04:39.248773 3716 log.go:172] (0xc000138790) (0xc0002c6000) Stream added, broadcasting: 5\nI0512 12:04:39.249511 3716 log.go:172] (0xc000138790) Reply frame received for 5\nI0512 12:04:39.729948 3716 log.go:172] (0xc000138790) Data frame received for 3\nI0512 12:04:39.729967 3716 log.go:172] (0xc00072a000) (3) Data frame handling\nI0512 12:04:39.729976 3716 log.go:172] (0xc00072a000) (3) Data frame sent\nI0512 12:04:39.730150 3716 log.go:172] (0xc000138790) Data frame received for 5\nI0512 12:04:39.730170 3716 log.go:172] (0xc0002c6000) (5) Data frame handling\nI0512 12:04:39.730502 3716 log.go:172] (0xc000138790) Data frame received for 3\nI0512 12:04:39.730528 3716 log.go:172] (0xc00072a000) (3) Data frame handling\nI0512 
12:04:39.732648 3716 log.go:172] (0xc000138790) Data frame received for 1\nI0512 12:04:39.732662 3716 log.go:172] (0xc0007b7360) (1) Data frame handling\nI0512 12:04:39.732673 3716 log.go:172] (0xc0007b7360) (1) Data frame sent\nI0512 12:04:39.732686 3716 log.go:172] (0xc000138790) (0xc0007b7360) Stream removed, broadcasting: 1\nI0512 12:04:39.732805 3716 log.go:172] (0xc000138790) (0xc0007b7360) Stream removed, broadcasting: 1\nI0512 12:04:39.732832 3716 log.go:172] (0xc000138790) Go away received\nI0512 12:04:39.732881 3716 log.go:172] (0xc000138790) (0xc00072a000) Stream removed, broadcasting: 3\nI0512 12:04:39.732912 3716 log.go:172] (0xc000138790) (0xc0002c6000) Stream removed, broadcasting: 5\n" May 12 12:04:39.738: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 12:04:39.738: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 12:04:39.876: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 12:04:49.953: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 12:04:49.953: INFO: Waiting for statefulset status.replicas updated to 0 May 12 12:04:50.169: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999577s May 12 12:04:51.205: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.849449401s May 12 12:04:52.209: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.813582317s May 12 12:04:53.253: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.8091742s May 12 12:04:54.265: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.764968007s May 12 12:04:55.307: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.753536968s May 12 12:04:56.367: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.711017092s May 12 12:04:57.370: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 2.651616812s May 12 12:04:58.373: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.648144321s May 12 12:04:59.524: INFO: Verifying statefulset ss doesn't scale past 1 for another 645.457409ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-nkhtn May 12 12:05:00.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:05:00.737: INFO: stderr: "I0512 12:05:00.654283 3738 log.go:172] (0xc000138840) (0xc0007f4640) Create stream\nI0512 12:05:00.654336 3738 log.go:172] (0xc000138840) (0xc0007f4640) Stream added, broadcasting: 1\nI0512 12:05:00.655739 3738 log.go:172] (0xc000138840) Reply frame received for 1\nI0512 12:05:00.655769 3738 log.go:172] (0xc000138840) (0xc0001a6d20) Create stream\nI0512 12:05:00.655780 3738 log.go:172] (0xc000138840) (0xc0001a6d20) Stream added, broadcasting: 3\nI0512 12:05:00.656391 3738 log.go:172] (0xc000138840) Reply frame received for 3\nI0512 12:05:00.656430 3738 log.go:172] (0xc000138840) (0xc0006a6000) Create stream\nI0512 12:05:00.656449 3738 log.go:172] (0xc000138840) (0xc0006a6000) Stream added, broadcasting: 5\nI0512 12:05:00.657000 3738 log.go:172] (0xc000138840) Reply frame received for 5\nI0512 12:05:00.725401 3738 log.go:172] (0xc000138840) Data frame received for 3\nI0512 12:05:00.725424 3738 log.go:172] (0xc0001a6d20) (3) Data frame handling\nI0512 12:05:00.725447 3738 log.go:172] (0xc0001a6d20) (3) Data frame sent\nI0512 12:05:00.725458 3738 log.go:172] (0xc000138840) Data frame received for 3\nI0512 12:05:00.725465 3738 log.go:172] (0xc0001a6d20) (3) Data frame handling\nI0512 12:05:00.725919 3738 log.go:172] (0xc000138840) Data frame received for 5\nI0512 12:05:00.725930 3738 log.go:172] (0xc0006a6000) (5) Data frame handling\nI0512 12:05:00.727098 
3738 log.go:172] (0xc000138840) Data frame received for 1\nI0512 12:05:00.727113 3738 log.go:172] (0xc0007f4640) (1) Data frame handling\nI0512 12:05:00.727124 3738 log.go:172] (0xc0007f4640) (1) Data frame sent\nI0512 12:05:00.727181 3738 log.go:172] (0xc000138840) (0xc0007f4640) Stream removed, broadcasting: 1\nI0512 12:05:00.727204 3738 log.go:172] (0xc000138840) Go away received\nI0512 12:05:00.727402 3738 log.go:172] (0xc000138840) (0xc0007f4640) Stream removed, broadcasting: 1\nI0512 12:05:00.727417 3738 log.go:172] (0xc000138840) (0xc0001a6d20) Stream removed, broadcasting: 3\nI0512 12:05:00.727427 3738 log.go:172] (0xc000138840) (0xc0006a6000) Stream removed, broadcasting: 5\n" May 12 12:05:00.737: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 12:05:00.737: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 12:05:00.739: INFO: Found 1 stateful pods, waiting for 3 May 12 12:05:10.744: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 12:05:10.744: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 12:05:10.744: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 12:05:20.744: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 12:05:20.744: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 12:05:20.744: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 12 12:05:20.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 
12 12:05:20.937: INFO: stderr: "I0512 12:05:20.873056 3759 log.go:172] (0xc000138580) (0xc0006d05a0) Create stream\nI0512 12:05:20.873099 3759 log.go:172] (0xc000138580) (0xc0006d05a0) Stream added, broadcasting: 1\nI0512 12:05:20.875014 3759 log.go:172] (0xc000138580) Reply frame received for 1\nI0512 12:05:20.875060 3759 log.go:172] (0xc000138580) (0xc00063cd20) Create stream\nI0512 12:05:20.875077 3759 log.go:172] (0xc000138580) (0xc00063cd20) Stream added, broadcasting: 3\nI0512 12:05:20.875747 3759 log.go:172] (0xc000138580) Reply frame received for 3\nI0512 12:05:20.875787 3759 log.go:172] (0xc000138580) (0xc0006d0640) Create stream\nI0512 12:05:20.875800 3759 log.go:172] (0xc000138580) (0xc0006d0640) Stream added, broadcasting: 5\nI0512 12:05:20.876532 3759 log.go:172] (0xc000138580) Reply frame received for 5\nI0512 12:05:20.931326 3759 log.go:172] (0xc000138580) Data frame received for 5\nI0512 12:05:20.931357 3759 log.go:172] (0xc000138580) Data frame received for 3\nI0512 12:05:20.931386 3759 log.go:172] (0xc00063cd20) (3) Data frame handling\nI0512 12:05:20.931400 3759 log.go:172] (0xc00063cd20) (3) Data frame sent\nI0512 12:05:20.931408 3759 log.go:172] (0xc000138580) Data frame received for 3\nI0512 12:05:20.931416 3759 log.go:172] (0xc00063cd20) (3) Data frame handling\nI0512 12:05:20.931443 3759 log.go:172] (0xc0006d0640) (5) Data frame handling\nI0512 12:05:20.933252 3759 log.go:172] (0xc000138580) Data frame received for 1\nI0512 12:05:20.933273 3759 log.go:172] (0xc0006d05a0) (1) Data frame handling\nI0512 12:05:20.933294 3759 log.go:172] (0xc0006d05a0) (1) Data frame sent\nI0512 12:05:20.933353 3759 log.go:172] (0xc000138580) (0xc0006d05a0) Stream removed, broadcasting: 1\nI0512 12:05:20.933525 3759 log.go:172] (0xc000138580) (0xc0006d05a0) Stream removed, broadcasting: 1\nI0512 12:05:20.933540 3759 log.go:172] (0xc000138580) (0xc00063cd20) Stream removed, broadcasting: 3\nI0512 12:05:20.933598 3759 log.go:172] (0xc000138580) Go away 
received\nI0512 12:05:20.933689 3759 log.go:172] (0xc000138580) (0xc0006d0640) Stream removed, broadcasting: 5\n" May 12 12:05:20.937: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 12:05:20.937: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 12:05:20.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 12:05:21.138: INFO: stderr: "I0512 12:05:21.042146 3781 log.go:172] (0xc00076a2c0) (0xc0006d0640) Create stream\nI0512 12:05:21.042213 3781 log.go:172] (0xc00076a2c0) (0xc0006d0640) Stream added, broadcasting: 1\nI0512 12:05:21.043933 3781 log.go:172] (0xc00076a2c0) Reply frame received for 1\nI0512 12:05:21.043971 3781 log.go:172] (0xc00076a2c0) (0xc0005aab40) Create stream\nI0512 12:05:21.043984 3781 log.go:172] (0xc00076a2c0) (0xc0005aab40) Stream added, broadcasting: 3\nI0512 12:05:21.044767 3781 log.go:172] (0xc00076a2c0) Reply frame received for 3\nI0512 12:05:21.044800 3781 log.go:172] (0xc00076a2c0) (0xc000276000) Create stream\nI0512 12:05:21.044812 3781 log.go:172] (0xc00076a2c0) (0xc000276000) Stream added, broadcasting: 5\nI0512 12:05:21.045624 3781 log.go:172] (0xc00076a2c0) Reply frame received for 5\nI0512 12:05:21.132549 3781 log.go:172] (0xc00076a2c0) Data frame received for 5\nI0512 12:05:21.132582 3781 log.go:172] (0xc000276000) (5) Data frame handling\nI0512 12:05:21.132618 3781 log.go:172] (0xc00076a2c0) Data frame received for 3\nI0512 12:05:21.132637 3781 log.go:172] (0xc0005aab40) (3) Data frame handling\nI0512 12:05:21.132654 3781 log.go:172] (0xc0005aab40) (3) Data frame sent\nI0512 12:05:21.132754 3781 log.go:172] (0xc00076a2c0) Data frame received for 3\nI0512 12:05:21.132775 3781 log.go:172] (0xc0005aab40) (3) Data frame handling\nI0512 12:05:21.134206 3781 
log.go:172] (0xc00076a2c0) Data frame received for 1\nI0512 12:05:21.134440 3781 log.go:172] (0xc0006d0640) (1) Data frame handling\nI0512 12:05:21.134527 3781 log.go:172] (0xc0006d0640) (1) Data frame sent\nI0512 12:05:21.134559 3781 log.go:172] (0xc00076a2c0) (0xc0006d0640) Stream removed, broadcasting: 1\nI0512 12:05:21.134606 3781 log.go:172] (0xc00076a2c0) Go away received\nI0512 12:05:21.134786 3781 log.go:172] (0xc00076a2c0) (0xc0006d0640) Stream removed, broadcasting: 1\nI0512 12:05:21.134827 3781 log.go:172] (0xc00076a2c0) (0xc0005aab40) Stream removed, broadcasting: 3\nI0512 12:05:21.134843 3781 log.go:172] (0xc00076a2c0) (0xc000276000) Stream removed, broadcasting: 5\n" May 12 12:05:21.138: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 12:05:21.138: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 12:05:21.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 12:05:21.385: INFO: stderr: "I0512 12:05:21.266861 3803 log.go:172] (0xc000764160) (0xc000162f00) Create stream\nI0512 12:05:21.266914 3803 log.go:172] (0xc000764160) (0xc000162f00) Stream added, broadcasting: 1\nI0512 12:05:21.269342 3803 log.go:172] (0xc000764160) Reply frame received for 1\nI0512 12:05:21.269379 3803 log.go:172] (0xc000764160) (0xc0005640a0) Create stream\nI0512 12:05:21.269395 3803 log.go:172] (0xc000764160) (0xc0005640a0) Stream added, broadcasting: 3\nI0512 12:05:21.270053 3803 log.go:172] (0xc000764160) Reply frame received for 3\nI0512 12:05:21.270078 3803 log.go:172] (0xc000764160) (0xc00087f860) Create stream\nI0512 12:05:21.270087 3803 log.go:172] (0xc000764160) (0xc00087f860) Stream added, broadcasting: 5\nI0512 12:05:21.270791 3803 log.go:172] (0xc000764160) Reply frame received for 5\nI0512 
12:05:21.376522 3803 log.go:172] (0xc000764160) Data frame received for 5\nI0512 12:05:21.376537 3803 log.go:172] (0xc00087f860) (5) Data frame handling\nI0512 12:05:21.376609 3803 log.go:172] (0xc000764160) Data frame received for 3\nI0512 12:05:21.376646 3803 log.go:172] (0xc0005640a0) (3) Data frame handling\nI0512 12:05:21.376674 3803 log.go:172] (0xc0005640a0) (3) Data frame sent\nI0512 12:05:21.376694 3803 log.go:172] (0xc000764160) Data frame received for 3\nI0512 12:05:21.376710 3803 log.go:172] (0xc0005640a0) (3) Data frame handling\nI0512 12:05:21.378979 3803 log.go:172] (0xc000764160) Data frame received for 1\nI0512 12:05:21.379011 3803 log.go:172] (0xc000162f00) (1) Data frame handling\nI0512 12:05:21.379037 3803 log.go:172] (0xc000162f00) (1) Data frame sent\nI0512 12:05:21.379066 3803 log.go:172] (0xc000764160) (0xc000162f00) Stream removed, broadcasting: 1\nI0512 12:05:21.379091 3803 log.go:172] (0xc000764160) Go away received\nI0512 12:05:21.379382 3803 log.go:172] (0xc000764160) (0xc000162f00) Stream removed, broadcasting: 1\nI0512 12:05:21.379419 3803 log.go:172] (0xc000764160) (0xc0005640a0) Stream removed, broadcasting: 3\nI0512 12:05:21.379442 3803 log.go:172] (0xc000764160) (0xc00087f860) Stream removed, broadcasting: 5\n" May 12 12:05:21.386: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 12:05:21.386: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 12:05:21.386: INFO: Waiting for statefulset status.replicas updated to 0 May 12 12:05:21.388: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 12 12:05:31.395: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 12:05:31.396: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 12:05:31.396: INFO: Waiting for pod ss-2 to enter Running - Ready=false, 
currently Running - Ready=false May 12 12:05:31.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999419s May 12 12:05:32.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976284326s May 12 12:05:33.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972578671s May 12 12:05:34.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.96827697s May 12 12:05:35.446: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.963129634s May 12 12:05:36.452: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.956479817s May 12 12:05:37.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.950162337s May 12 12:05:38.460: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.945887706s May 12 12:05:39.464: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.941626409s May 12 12:05:40.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.899316ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-nkhtn May 12 12:05:41.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:05:41.698: INFO: stderr: "I0512 12:05:41.610907 3825 log.go:172] (0xc000138840) (0xc000766640) Create stream\nI0512 12:05:41.610988 3825 log.go:172] (0xc000138840) (0xc000766640) Stream added, broadcasting: 1\nI0512 12:05:41.614347 3825 log.go:172] (0xc000138840) Reply frame received for 1\nI0512 12:05:41.614397 3825 log.go:172] (0xc000138840) (0xc000698c80) Create stream\nI0512 12:05:41.614411 3825 log.go:172] (0xc000138840) (0xc000698c80) Stream added, broadcasting: 3\nI0512 12:05:41.615346 3825 log.go:172] (0xc000138840) Reply frame received for 3\nI0512 12:05:41.615389 3825 log.go:172] (0xc000138840) (0xc0007666e0) Create
stream\nI0512 12:05:41.615398 3825 log.go:172] (0xc000138840) (0xc0007666e0) Stream added, broadcasting: 5\nI0512 12:05:41.616224 3825 log.go:172] (0xc000138840) Reply frame received for 5\nI0512 12:05:41.691863 3825 log.go:172] (0xc000138840) Data frame received for 5\nI0512 12:05:41.691910 3825 log.go:172] (0xc0007666e0) (5) Data frame handling\nI0512 12:05:41.691936 3825 log.go:172] (0xc000138840) Data frame received for 3\nI0512 12:05:41.691945 3825 log.go:172] (0xc000698c80) (3) Data frame handling\nI0512 12:05:41.691956 3825 log.go:172] (0xc000698c80) (3) Data frame sent\nI0512 12:05:41.691964 3825 log.go:172] (0xc000138840) Data frame received for 3\nI0512 12:05:41.691972 3825 log.go:172] (0xc000698c80) (3) Data frame handling\nI0512 12:05:41.693411 3825 log.go:172] (0xc000138840) Data frame received for 1\nI0512 12:05:41.693431 3825 log.go:172] (0xc000766640) (1) Data frame handling\nI0512 12:05:41.693444 3825 log.go:172] (0xc000766640) (1) Data frame sent\nI0512 12:05:41.693452 3825 log.go:172] (0xc000138840) (0xc000766640) Stream removed, broadcasting: 1\nI0512 12:05:41.693479 3825 log.go:172] (0xc000138840) Go away received\nI0512 12:05:41.693698 3825 log.go:172] (0xc000138840) (0xc000766640) Stream removed, broadcasting: 1\nI0512 12:05:41.693728 3825 log.go:172] (0xc000138840) (0xc000698c80) Stream removed, broadcasting: 3\nI0512 12:05:41.693747 3825 log.go:172] (0xc000138840) (0xc0007666e0) Stream removed, broadcasting: 5\n" May 12 12:05:41.698: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 12:05:41.698: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 12:05:41.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:05:41.895: INFO: stderr: "I0512 12:05:41.824864 3848 log.go:172] 
(0xc000138840) (0xc0006872c0) Create stream\nI0512 12:05:41.824912 3848 log.go:172] (0xc000138840) (0xc0006872c0) Stream added, broadcasting: 1\nI0512 12:05:41.826839 3848 log.go:172] (0xc000138840) Reply frame received for 1\nI0512 12:05:41.826872 3848 log.go:172] (0xc000138840) (0xc000750000) Create stream\nI0512 12:05:41.826885 3848 log.go:172] (0xc000138840) (0xc000750000) Stream added, broadcasting: 3\nI0512 12:05:41.827624 3848 log.go:172] (0xc000138840) Reply frame received for 3\nI0512 12:05:41.827647 3848 log.go:172] (0xc000138840) (0xc0006c8000) Create stream\nI0512 12:05:41.827657 3848 log.go:172] (0xc000138840) (0xc0006c8000) Stream added, broadcasting: 5\nI0512 12:05:41.828353 3848 log.go:172] (0xc000138840) Reply frame received for 5\nI0512 12:05:41.888580 3848 log.go:172] (0xc000138840) Data frame received for 3\nI0512 12:05:41.888607 3848 log.go:172] (0xc000750000) (3) Data frame handling\nI0512 12:05:41.888620 3848 log.go:172] (0xc000750000) (3) Data frame sent\nI0512 12:05:41.888943 3848 log.go:172] (0xc000138840) Data frame received for 3\nI0512 12:05:41.888959 3848 log.go:172] (0xc000750000) (3) Data frame handling\nI0512 12:05:41.888983 3848 log.go:172] (0xc000138840) Data frame received for 5\nI0512 12:05:41.888997 3848 log.go:172] (0xc0006c8000) (5) Data frame handling\nI0512 12:05:41.890312 3848 log.go:172] (0xc000138840) Data frame received for 1\nI0512 12:05:41.890338 3848 log.go:172] (0xc0006872c0) (1) Data frame handling\nI0512 12:05:41.890349 3848 log.go:172] (0xc0006872c0) (1) Data frame sent\nI0512 12:05:41.891054 3848 log.go:172] (0xc000138840) (0xc0006872c0) Stream removed, broadcasting: 1\nI0512 12:05:41.891089 3848 log.go:172] (0xc000138840) Go away received\nI0512 12:05:41.891197 3848 log.go:172] (0xc000138840) (0xc0006872c0) Stream removed, broadcasting: 1\nI0512 12:05:41.891215 3848 log.go:172] (0xc000138840) (0xc000750000) Stream removed, broadcasting: 3\nI0512 12:05:41.891223 3848 log.go:172] (0xc000138840) (0xc0006c8000) 
Stream removed, broadcasting: 5\n" May 12 12:05:41.895: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 12:05:41.895: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 12:05:41.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:05:42.208: INFO: rc: 1 May 12 12:05:42.208: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0512 12:05:42.191211 3871 log.go:172] (0xc00015c630) (0xc000025400) Create stream I0512 12:05:42.191285 3871 log.go:172] (0xc00015c630) (0xc000025400) Stream added, broadcasting: 1 I0512 12:05:42.194072 3871 log.go:172] (0xc00015c630) Reply frame received for 1 I0512 12:05:42.194106 3871 log.go:172] (0xc00015c630) (0xc0002a0000) Create stream I0512 12:05:42.194114 3871 log.go:172] (0xc00015c630) (0xc0002a0000) Stream added, broadcasting: 3 I0512 12:05:42.194776 3871 log.go:172] (0xc00015c630) Reply frame received for 3 I0512 12:05:42.194800 3871 log.go:172] (0xc00015c630) (0xc0000254a0) Create stream I0512 12:05:42.194807 3871 log.go:172] (0xc00015c630) (0xc0000254a0) Stream added, broadcasting: 5 I0512 12:05:42.195701 3871 log.go:172] (0xc00015c630) Reply frame received for 5 I0512 12:05:42.199396 3871 log.go:172] (0xc00015c630) (0xc0002a0000) Stream removed, broadcasting: 3 I0512 12:05:42.199454 3871 log.go:172] (0xc00015c630) Data frame received for 1 I0512 12:05:42.199467 3871 log.go:172] (0xc000025400) (1) Data frame handling I0512 12:05:42.199477 3871 log.go:172] (0xc000025400) (1) Data frame sent I0512 12:05:42.199489 3871 log.go:172] (0xc00015c630) (0xc000025400) Stream 
removed, broadcasting: 1 I0512 12:05:42.199774 3871 log.go:172] (0xc00015c630) (0xc0000254a0) Stream removed, broadcasting: 5 I0512 12:05:42.199799 3871 log.go:172] (0xc00015c630) (0xc000025400) Stream removed, broadcasting: 1 I0512 12:05:42.199809 3871 log.go:172] (0xc00015c630) (0xc0002a0000) Stream removed, broadcasting: 3 I0512 12:05:42.199818 3871 log.go:172] (0xc00015c630) (0xc0000254a0) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "2c6f43de526636ecc3feb8b85415f16dc3d4cde829ab7a1cb9eb7e0711d88978": cannot exec in a stopped state: unknown [] 0xc00109c960 exit status 1 true [0xc0013257b0 0xc0013257d8 0xc0013257f0] [0xc0013257b0 0xc0013257d8 0xc0013257f0] [0xc0013257d0 0xc0013257e8] [0x935700 0x935700] 0xc0017f0900 }: Command stdout: stderr: I0512 12:05:42.191211 3871 log.go:172] (0xc00015c630) (0xc000025400) Create stream I0512 12:05:42.191285 3871 log.go:172] (0xc00015c630) (0xc000025400) Stream added, broadcasting: 1 I0512 12:05:42.194072 3871 log.go:172] (0xc00015c630) Reply frame received for 1 I0512 12:05:42.194106 3871 log.go:172] (0xc00015c630) (0xc0002a0000) Create stream I0512 12:05:42.194114 3871 log.go:172] (0xc00015c630) (0xc0002a0000) Stream added, broadcasting: 3 I0512 12:05:42.194776 3871 log.go:172] (0xc00015c630) Reply frame received for 3 I0512 12:05:42.194800 3871 log.go:172] (0xc00015c630) (0xc0000254a0) Create stream I0512 12:05:42.194807 3871 log.go:172] (0xc00015c630) (0xc0000254a0) Stream added, broadcasting: 5 I0512 12:05:42.195701 3871 log.go:172] (0xc00015c630) Reply frame received for 5 I0512 12:05:42.199396 3871 log.go:172] (0xc00015c630) (0xc0002a0000) Stream removed, broadcasting: 3 I0512 12:05:42.199454 3871 log.go:172] (0xc00015c630) Data frame received for 1 I0512 12:05:42.199467 3871 log.go:172] (0xc000025400) (1) Data frame handling I0512 12:05:42.199477 3871 log.go:172] (0xc000025400) (1) Data frame sent I0512 
12:05:42.199489 3871 log.go:172] (0xc00015c630) (0xc000025400) Stream removed, broadcasting: 1 I0512 12:05:42.199774 3871 log.go:172] (0xc00015c630) (0xc0000254a0) Stream removed, broadcasting: 5 I0512 12:05:42.199799 3871 log.go:172] (0xc00015c630) (0xc000025400) Stream removed, broadcasting: 1 I0512 12:05:42.199809 3871 log.go:172] (0xc00015c630) (0xc0002a0000) Stream removed, broadcasting: 3 I0512 12:05:42.199818 3871 log.go:172] (0xc00015c630) (0xc0000254a0) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "2c6f43de526636ecc3feb8b85415f16dc3d4cde829ab7a1cb9eb7e0711d88978": cannot exec in a stopped state: unknown error: exit status 1 May 12 12:05:52.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:05:52.654: INFO: rc: 1 May 12 12:05:52.654: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00109cae0 exit status 1 true [0xc001325808 0xc001325860 0xc001325878] [0xc001325808 0xc001325860 0xc001325878] [0xc001325848 0xc001325870] [0x935700 0x935700] 0xc0017f18c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:06:02.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:06:02.948: INFO: rc: 1 May 12 12:06:02.948: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a120 exit status 1 true [0xc00158a000 0xc00158a030 0xc00158a048] [0xc00158a000 0xc00158a030 0xc00158a048] [0xc00158a028 0xc00158a040] [0x935700 0x935700] 0xc0017f09c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:06:12.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:06:13.041: INFO: rc: 1 May 12 12:06:13.041: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a270 exit status 1 true [0xc00158a050 0xc00158a090 0xc00158a0b8] [0xc00158a050 0xc00158a090 0xc00158a0b8] [0xc00158a088 0xc00158a0b0] [0x935700 0x935700] 0xc001492540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:06:23.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:06:23.183: INFO: rc: 1 May 12 12:06:23.183: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a3c0 exit status 1 true [0xc00158a0c0 0xc00158a0d8 0xc00158a0f0] [0xc00158a0c0 0xc00158a0d8 0xc00158a0f0] [0xc00158a0d0 0xc00158a0e8] 
[0x935700 0x935700] 0xc001afa1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:06:33.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:06:33.494: INFO: rc: 1 May 12 12:06:33.494: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c98210 exit status 1 true [0xc00000ec70 0xc00000f0b8 0xc00000f3c0] [0xc00000ec70 0xc00000f0b8 0xc00000f3c0] [0xc00000eed8 0xc00000f1c0] [0x935700 0x935700] 0xc000954420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:06:43.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:06:43.587: INFO: rc: 1 May 12 12:06:43.587: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a2270 exit status 1 true [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6010 0xc0015c6028] [0x935700 0x935700] 0xc000a84300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:06:53.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' May 12 12:06:53.719: INFO: rc: 1 May 12 12:06:53.720: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a2390 exit status 1 true [0xc0015c6038 0xc0015c6050 0xc0015c6068] [0xc0015c6038 0xc0015c6050 0xc0015c6068] [0xc0015c6048 0xc0015c6060] [0x935700 0x935700] 0xc000a84840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:07:03.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:07:03.813: INFO: rc: 1 May 12 12:07:03.813: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a540 exit status 1 true [0xc00158a0f8 0xc00158a110 0xc00158a128] [0xc00158a0f8 0xc00158a110 0xc00158a128] [0xc00158a108 0xc00158a120] [0x935700 0x935700] 0xc001afb140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:07:13.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:07:13.909: INFO: rc: 1 May 12 12:07:13.909: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c98360 exit status 1 true [0xc00000f430 0xc00000f610 0xc00000f6e0] [0xc00000f430 0xc00000f610 0xc00000f6e0] [0xc00000f570 0xc00000f6b8] [0x935700 0x935700] 0xc000954960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:07:23.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:07:24.002: INFO: rc: 1 May 12 12:07:24.002: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a690 exit status 1 true [0xc00158a130 0xc00158a148 0xc00158a160] [0xc00158a130 0xc00158a148 0xc00158a160] [0xc00158a140 0xc00158a158] [0x935700 0x935700] 0xc001afbf80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:07:34.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:07:34.453: INFO: rc: 1 May 12 12:07:34.453: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a24b0 exit status 1 true [0xc0015c6070 0xc0015c6088 0xc0015c60a0] [0xc0015c6070 0xc0015c6088 0xc0015c60a0] [0xc0015c6080 0xc0015c6098] [0x935700 0x935700] 0xc000a84ea0 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-2" not found error: exit status 1 May 12 12:07:44.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:07:44.543: INFO: rc: 1 May 12 12:07:44.543: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a25d0 exit status 1 true [0xc0015c60a8 0xc0015c60c0 0xc0015c60d8] [0xc0015c60a8 0xc0015c60c0 0xc0015c60d8] [0xc0015c60b8 0xc0015c60d0] [0x935700 0x935700] 0xc000a85560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:07:54.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:07:54.629: INFO: rc: 1 May 12 12:07:54.629: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a26f0 exit status 1 true [0xc0015c60e0 0xc0015c60f8 0xc0015c6110] [0xc0015c60e0 0xc0015c60f8 0xc0015c6110] [0xc0015c60f0 0xc0015c6108] [0x935700 0x935700] 0xc000a85bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 12 12:08:04.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 12:08:04.727: INFO: rc: 1 May 12 12:08:04.727: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0003fd4d0 exit status 1 true [0xc001612008 0xc001612020 0xc001612038] [0xc001612008 0xc001612020 0xc001612038] [0xc001612018 0xc001612030] [0x935700 0x935700] 0xc001750540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:08:14.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:08:14.834: INFO: rc: 1
May 12 12:08:14.834: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a2240 exit status 1 true [0xc00000e1b8 0xc00000eed8 0xc00000f1c0] [0xc00000e1b8 0xc00000eed8 0xc00000f1c0] [0xc00000ee80 0xc00000f0c0] [0x935700 0x935700] 0xc001afa2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:08:24.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:08:24.907: INFO: rc: 1
May 12 12:08:24.908: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c98270 exit status 1 true [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6010 0xc0015c6028] [0x935700 0x935700] 0xc0017f06c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:08:34.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:08:34.992: INFO: rc: 1
May 12 12:08:34.992: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c98450 exit status 1 true [0xc0015c6038 0xc0015c6050 0xc0015c6068] [0xc0015c6038 0xc0015c6050 0xc0015c6068] [0xc0015c6048 0xc0015c6060] [0x935700 0x935700] 0xc0017f1800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:08:44.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:08:45.316: INFO: rc: 1
May 12 12:08:45.316: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a150 exit status 1 true [0xc001612040 0xc001612058 0xc001612070] [0xc001612040 0xc001612058 0xc001612070] [0xc001612050 0xc001612068] [0x935700 0x935700] 0xc000954240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:08:55.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:08:55.681: INFO: rc: 1
May 12 12:08:55.682: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a23f0 exit status 1 true [0xc00000f3c0 0xc00000f570 0xc00000f6b8] [0xc00000f3c0 0xc00000f570 0xc00000f6b8] [0xc00000f478 0xc00000f6a0] [0x935700 0x935700] 0xc001afb2c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:09:05.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:09:05.782: INFO: rc: 1
May 12 12:09:05.782: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a2540 exit status 1 true [0xc00000f6e0 0xc00000f850 0xc00000f978] [0xc00000f6e0 0xc00000f850 0xc00000f978] [0xc00000f790 0xc00000f8d8] [0x935700 0x935700] 0xc000a84060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:09:15.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:09:15.868: INFO: rc: 1
May 12 12:09:15.868: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a2d0 exit status 1 true [0xc001612078 0xc001612090 0xc0016120a8] [0xc001612078 0xc001612090 0xc0016120a8] [0xc001612088 0xc0016120a0] [0x935700 0x935700] 0xc0009547e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:09:25.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:09:26.191: INFO: rc: 1
May 12 12:09:26.191: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a26c0 exit status 1 true [0xc00000f9b8 0xc00000fb28 0xc00000fc58] [0xc00000f9b8 0xc00000fb28 0xc00000fc58] [0xc00000fab8 0xc00000fba0] [0x935700 0x935700] 0xc000a84420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:09:36.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:09:36.287: INFO: rc: 1
May 12 12:09:36.287: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a450 exit status 1 true [0xc0016120b0 0xc0016120c8 0xc0016120e0] [0xc0016120b0 0xc0016120c8 0xc0016120e0] [0xc0016120c0 0xc0016120d8] [0x935700 0x935700] 0xc000954c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:09:46.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:09:46.380: INFO: rc: 1
May 12 12:09:46.380: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a5a0 exit status 1 true [0xc0016120e8 0xc001612100 0xc001612118] [0xc0016120e8 0xc001612100 0xc001612118] [0xc0016120f8 0xc001612110] [0x935700 0x935700] 0xc000954f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:09:56.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:09:56.463: INFO: rc: 1
May 12 12:09:56.464: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c98570 exit status 1 true [0xc0015c6070 0xc0015c6088 0xc0015c60a0] [0xc0015c6070 0xc0015c6088 0xc0015c60a0] [0xc0015c6080 0xc0015c6098] [0x935700 0x935700] 0xc001750840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:10:06.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:10:06.561: INFO: rc: 1
May 12 12:10:06.561: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a120 exit status 1 true [0xc001612008 0xc001612020 0xc001612038] [0xc001612008 0xc001612020 0xc001612038] [0xc001612018 0xc001612030] [0x935700 0x935700] 0xc0017f09c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:10:16.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:10:16.659: INFO: rc: 1
May 12 12:10:16.660: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016a22a0 exit status 1 true [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6000 0xc0015c6018 0xc0015c6030] [0xc0015c6010 0xc0015c6028] [0x935700 0x935700] 0xc001afa0c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:10:26.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:10:26.746: INFO: rc: 1
May 12 12:10:26.746: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00252a270 exit status 1 true [0xc001612040 0xc001612058 0xc001612070] [0xc001612040 0xc001612058 0xc001612070] [0xc001612050 0xc001612068] [0x935700 0x935700] 0xc000954060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:10:36.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:10:36.827: INFO: rc: 1
May 12 12:10:36.827: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c98210 exit status 1 true [0xc00000e1b8 0xc00000eed8 0xc00000f1c0] [0xc00000e1b8 0xc00000eed8 0xc00000f1c0] [0xc00000ee80 0xc00000f0c0] [0x935700 0x935700] 0xc001750540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 12 12:10:46.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkhtn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 12:10:46.927: INFO: rc: 1
May 12 12:10:46.927: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
May 12 12:10:46.927: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 12 12:10:46.939: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nkhtn
May 12 12:10:46.941: INFO: Scaling statefulset ss to 0
May 12 12:10:46.951: INFO: Waiting for statefulset status.replicas updated to 0
May 12 12:10:46.954: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 12:10:46.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nkhtn" for this suite.
May 12 12:10:55.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:10:55.023: INFO: namespace: e2e-tests-statefulset-nkhtn, resource: bindings, ignored listing per whitelist
May 12 12:10:55.075: INFO: namespace e2e-tests-statefulset-nkhtn deletion completed in 8.095613898s

• [SLOW TEST:387.048 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
May 12 12:10:55.075: INFO: Running AfterSuite actions on all nodes
May 12 12:10:55.075: INFO: Running AfterSuite actions on node 1
May 12 12:10:55.075: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 8140.413 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS