I0522 10:46:54.828222 6 e2e.go:224] Starting e2e run "8d0c7d81-9c19-11ea-8e9c-0242ac110018" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1590144414 - Will randomize all specs Will run 201 of 2164 specs May 22 10:46:55.000: INFO: >>> kubeConfig: /root/.kube/config May 22 10:46:55.003: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 22 10:46:55.016: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 22 10:46:55.044: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 22 10:46:55.044: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 22 10:46:55.044: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 22 10:46:55.052: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 22 10:46:55.052: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 22 10:46:55.052: INFO: e2e test version: v1.13.12 May 22 10:46:55.053: INFO: kube-apiserver version: v1.13.12 SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:46:55.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces May 22 10:46:55.180: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 22 10:46:59.587: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:47:23.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-bl4gx" for this suite. May 22 10:47:29.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:47:29.717: INFO: namespace: e2e-tests-namespaces-bl4gx, resource: bindings, ignored listing per whitelist May 22 10:47:29.752: INFO: namespace e2e-tests-namespaces-bl4gx deletion completed in 6.094264302s STEP: Destroying namespace "e2e-tests-nsdeletetest-nc5br" for this suite. May 22 10:47:29.755: INFO: Namespace e2e-tests-nsdeletetest-nc5br was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-sz696" for this suite. 
May 22 10:47:35.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:47:35.815: INFO: namespace: e2e-tests-nsdeletetest-sz696, resource: bindings, ignored listing per whitelist May 22 10:47:35.841: INFO: namespace e2e-tests-nsdeletetest-sz696 deletion completed in 6.085610435s • [SLOW TEST:40.788 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:47:35.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 22 10:47:35.940: INFO: Waiting up to 5m0s for pod "pod-a5e0a2c0-9c19-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-ghvvh" to be "success or failure" May 22 10:47:35.944: INFO: Pod "pod-a5e0a2c0-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.705752ms May 22 10:47:37.962: INFO: Pod "pod-a5e0a2c0-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022051712s May 22 10:47:39.965: INFO: Pod "pod-a5e0a2c0-9c19-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025406342s STEP: Saw pod success May 22 10:47:39.965: INFO: Pod "pod-a5e0a2c0-9c19-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:47:39.968: INFO: Trying to get logs from node hunter-worker pod pod-a5e0a2c0-9c19-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 10:47:40.023: INFO: Waiting for pod pod-a5e0a2c0-9c19-11ea-8e9c-0242ac110018 to disappear May 22 10:47:40.034: INFO: Pod pod-a5e0a2c0-9c19-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:47:40.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ghvvh" for this suite. 
May 22 10:47:46.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:47:46.060: INFO: namespace: e2e-tests-emptydir-ghvvh, resource: bindings, ignored listing per whitelist May 22 10:47:46.127: INFO: namespace e2e-tests-emptydir-ghvvh deletion completed in 6.089612074s • [SLOW TEST:10.286 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:47:46.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 10:47:46.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac031164-9c19-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-54brr" to be "success or failure" May 22 10:47:46.283: INFO: Pod "downwardapi-volume-ac031164-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 56.480695ms May 22 10:47:48.331: INFO: Pod "downwardapi-volume-ac031164-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104256414s May 22 10:47:50.335: INFO: Pod "downwardapi-volume-ac031164-9c19-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108538085s STEP: Saw pod success May 22 10:47:50.335: INFO: Pod "downwardapi-volume-ac031164-9c19-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:47:50.338: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ac031164-9c19-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 10:47:50.359: INFO: Waiting for pod downwardapi-volume-ac031164-9c19-11ea-8e9c-0242ac110018 to disappear May 22 10:47:50.540: INFO: Pod downwardapi-volume-ac031164-9c19-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:47:50.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-54brr" for this suite. 
May 22 10:47:56.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:47:56.719: INFO: namespace: e2e-tests-downward-api-54brr, resource: bindings, ignored listing per whitelist May 22 10:47:56.750: INFO: namespace e2e-tests-downward-api-54brr deletion completed in 6.20415976s • [SLOW TEST:10.623 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:47:56.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vw5vw STEP: creating a selector STEP: Creating the service pods in kubernetes May 22 10:47:56.865: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 22 10:48:23.043: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.132:8080/dial?request=hostName&protocol=http&host=10.244.2.156&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-vw5vw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 10:48:23.043: INFO: >>> kubeConfig: /root/.kube/config I0522 10:48:23.074742 6 log.go:172] (0xc001af0420) (0xc001a57680) Create stream I0522 10:48:23.074778 6 log.go:172] (0xc001af0420) (0xc001a57680) Stream added, broadcasting: 1 I0522 10:48:23.077307 6 log.go:172] (0xc001af0420) Reply frame received for 1 I0522 10:48:23.077378 6 log.go:172] (0xc001af0420) (0xc0019035e0) Create stream I0522 10:48:23.077445 6 log.go:172] (0xc001af0420) (0xc0019035e0) Stream added, broadcasting: 3 I0522 10:48:23.078516 6 log.go:172] (0xc001af0420) Reply frame received for 3 I0522 10:48:23.078551 6 log.go:172] (0xc001af0420) (0xc001a57720) Create stream I0522 10:48:23.078562 6 log.go:172] (0xc001af0420) (0xc001a57720) Stream added, broadcasting: 5 I0522 10:48:23.079718 6 log.go:172] (0xc001af0420) Reply frame received for 5 I0522 10:48:23.236334 6 log.go:172] (0xc001af0420) Data frame received for 3 I0522 10:48:23.236378 6 log.go:172] (0xc0019035e0) (3) Data frame handling I0522 10:48:23.236397 6 log.go:172] (0xc0019035e0) (3) Data frame sent I0522 10:48:23.237003 6 log.go:172] (0xc001af0420) Data frame received for 3 I0522 10:48:23.237021 6 log.go:172] (0xc0019035e0) (3) Data frame handling I0522 10:48:23.237054 6 log.go:172] (0xc001af0420) Data frame received for 5 I0522 10:48:23.237076 6 log.go:172] (0xc001a57720) (5) Data frame handling I0522 10:48:23.239086 6 log.go:172] (0xc001af0420) Data frame 
received for 1 I0522 10:48:23.239106 6 log.go:172] (0xc001a57680) (1) Data frame handling I0522 10:48:23.239116 6 log.go:172] (0xc001a57680) (1) Data frame sent I0522 10:48:23.239135 6 log.go:172] (0xc001af0420) (0xc001a57680) Stream removed, broadcasting: 1 I0522 10:48:23.239150 6 log.go:172] (0xc001af0420) Go away received I0522 10:48:23.239548 6 log.go:172] (0xc001af0420) (0xc001a57680) Stream removed, broadcasting: 1 I0522 10:48:23.239567 6 log.go:172] (0xc001af0420) (0xc0019035e0) Stream removed, broadcasting: 3 I0522 10:48:23.239578 6 log.go:172] (0xc001af0420) (0xc001a57720) Stream removed, broadcasting: 5 May 22 10:48:23.239: INFO: Waiting for endpoints: map[] May 22 10:48:23.243: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.132:8080/dial?request=hostName&protocol=http&host=10.244.1.131&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-vw5vw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 10:48:23.243: INFO: >>> kubeConfig: /root/.kube/config I0522 10:48:23.272723 6 log.go:172] (0xc000e782c0) (0xc001090280) Create stream I0522 10:48:23.272748 6 log.go:172] (0xc000e782c0) (0xc001090280) Stream added, broadcasting: 1 I0522 10:48:23.275205 6 log.go:172] (0xc000e782c0) Reply frame received for 1 I0522 10:48:23.275253 6 log.go:172] (0xc000e782c0) (0xc001a57900) Create stream I0522 10:48:23.275270 6 log.go:172] (0xc000e782c0) (0xc001a57900) Stream added, broadcasting: 3 I0522 10:48:23.276219 6 log.go:172] (0xc000e782c0) Reply frame received for 3 I0522 10:48:23.276243 6 log.go:172] (0xc000e782c0) (0xc001a579a0) Create stream I0522 10:48:23.276252 6 log.go:172] (0xc000e782c0) (0xc001a579a0) Stream added, broadcasting: 5 I0522 10:48:23.277423 6 log.go:172] (0xc000e782c0) Reply frame received for 5 I0522 10:48:23.362825 6 log.go:172] (0xc000e782c0) Data frame received for 3 I0522 10:48:23.362852 6 log.go:172] (0xc001a57900) (3) Data frame handling I0522 10:48:23.362869 6 log.go:172] (0xc001a57900) (3) Data frame sent I0522 10:48:23.363955 6 log.go:172] (0xc000e782c0) Data frame received for 3 I0522 10:48:23.363978 6 log.go:172] (0xc001a57900) (3) Data frame handling I0522 10:48:23.364000 6 log.go:172] (0xc000e782c0) Data frame received for 5 I0522 10:48:23.364009 6 log.go:172] (0xc001a579a0) (5) Data frame handling I0522 10:48:23.366433 6 log.go:172] (0xc000e782c0) Data frame received for 1 I0522 10:48:23.366486 6 log.go:172] (0xc001090280) (1) Data frame handling I0522 10:48:23.366517 6 log.go:172] (0xc001090280) (1) Data frame sent I0522 10:48:23.366826 6 log.go:172] (0xc000e782c0) (0xc001090280) Stream removed, broadcasting: 1 I0522 10:48:23.366846 6 log.go:172] (0xc000e782c0) Go away received I0522 10:48:23.366968 6 log.go:172] (0xc000e782c0) (0xc001090280) Stream removed, broadcasting: 1 I0522 10:48:23.366993 6 log.go:172] (0xc000e782c0) (0xc001a57900) Stream removed, broadcasting: 3 I0522 10:48:23.367006 6 log.go:172] (0xc000e782c0) (0xc001a579a0) Stream removed, broadcasting: 5 May 22 10:48:23.367: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:48:23.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-vw5vw" for this suite. 
May 22 10:48:45.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:48:45.466: INFO: namespace: e2e-tests-pod-network-test-vw5vw, resource: bindings, ignored listing per whitelist May 22 10:48:45.492: INFO: namespace e2e-tests-pod-network-test-vw5vw deletion completed in 22.087806037s • [SLOW TEST:48.742 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:48:45.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-cf64a4a1-9c19-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 10:48:45.613: INFO: Waiting up to 5m0s for pod "pod-secrets-cf66406e-9c19-11ea-8e9c-0242ac110018" in namespace "e2e-tests-secrets-fqw99" to be "success or failure" May 22 10:48:45.642: INFO: Pod "pod-secrets-cf66406e-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.331236ms May 22 10:48:47.646: INFO: Pod "pod-secrets-cf66406e-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033091765s May 22 10:48:49.651: INFO: Pod "pod-secrets-cf66406e-9c19-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037512768s STEP: Saw pod success May 22 10:48:49.651: INFO: Pod "pod-secrets-cf66406e-9c19-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:48:49.655: INFO: Trying to get logs from node hunter-worker pod pod-secrets-cf66406e-9c19-11ea-8e9c-0242ac110018 container secret-volume-test: STEP: delete the pod May 22 10:48:49.716: INFO: Waiting for pod pod-secrets-cf66406e-9c19-11ea-8e9c-0242ac110018 to disappear May 22 10:48:49.736: INFO: Pod pod-secrets-cf66406e-9c19-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:48:49.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-fqw99" for this suite. 
May 22 10:48:55.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:48:55.820: INFO: namespace: e2e-tests-secrets-fqw99, resource: bindings, ignored listing per whitelist May 22 10:48:55.863: INFO: namespace e2e-tests-secrets-fqw99 deletion completed in 6.122397893s • [SLOW TEST:10.371 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:48:55.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 10:49:00.024: INFO: Waiting up to 5m0s for pod "client-envvars-d7fee01f-9c19-11ea-8e9c-0242ac110018" in namespace "e2e-tests-pods-swzd5" to be "success or failure" May 22 10:49:00.035: INFO: Pod "client-envvars-d7fee01f-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.827502ms May 22 10:49:02.040: INFO: Pod "client-envvars-d7fee01f-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015963994s May 22 10:49:04.043: INFO: Pod "client-envvars-d7fee01f-9c19-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019589386s STEP: Saw pod success May 22 10:49:04.043: INFO: Pod "client-envvars-d7fee01f-9c19-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:49:04.045: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-d7fee01f-9c19-11ea-8e9c-0242ac110018 container env3cont: STEP: delete the pod May 22 10:49:04.086: INFO: Waiting for pod client-envvars-d7fee01f-9c19-11ea-8e9c-0242ac110018 to disappear May 22 10:49:04.095: INFO: Pod client-envvars-d7fee01f-9c19-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:49:04.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-swzd5" for this suite. 
May 22 10:49:42.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:49:42.162: INFO: namespace: e2e-tests-pods-swzd5, resource: bindings, ignored listing per whitelist May 22 10:49:42.185: INFO: namespace e2e-tests-pods-swzd5 deletion completed in 38.086449227s • [SLOW TEST:46.322 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:49:42.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-f12b5879-9c19-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 10:49:42.305: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f12bd956-9c19-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-f46pt" to be "success or failure" May 22 10:49:42.312: INFO: Pod "pod-projected-configmaps-f12bd956-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315984ms May 22 10:49:44.316: INFO: Pod "pod-projected-configmaps-f12bd956-9c19-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010159165s May 22 10:49:46.319: INFO: Pod "pod-projected-configmaps-f12bd956-9c19-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013633371s STEP: Saw pod success May 22 10:49:46.319: INFO: Pod "pod-projected-configmaps-f12bd956-9c19-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:49:46.322: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-f12bd956-9c19-11ea-8e9c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 22 10:49:46.349: INFO: Waiting for pod pod-projected-configmaps-f12bd956-9c19-11ea-8e9c-0242ac110018 to disappear May 22 10:49:46.447: INFO: Pod pod-projected-configmaps-f12bd956-9c19-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:49:46.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f46pt" for this suite. 
May 22 10:49:52.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:49:52.492: INFO: namespace: e2e-tests-projected-f46pt, resource: bindings, ignored listing per whitelist May 22 10:49:52.561: INFO: namespace e2e-tests-projected-f46pt deletion completed in 6.109873119s • [SLOW TEST:10.375 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:49:52.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 10:49:52.654: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 22 10:49:57.658: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 22 10:49:57.658: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 22 10:49:59.663: INFO: Creating deployment "test-rollover-deployment" May 22 10:49:59.672: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 22 10:50:01.679: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 22 10:50:01.685: INFO: Ensure that both replica sets have 1 created replica May 22 10:50:01.689: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 22 10:50:01.695: INFO: Updating deployment test-rollover-deployment May 22 10:50:01.695: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 22 10:50:03.706: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 22 10:50:03.710: INFO: Make sure deployment "test-rollover-deployment" is complete May 22 10:50:03.715: INFO: all replica sets need to contain the pod-template-hash label May 22 10:50:03.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741401, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 10:50:05.723: INFO: all replica sets need to contain the pod-template-hash label May 22 10:50:05.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741401, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 10:50:07.722: INFO: all replica sets need to contain the pod-template-hash label May 22 10:50:07.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741406, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 10:50:09.723: INFO: all replica sets need to contain the pod-template-hash label May 22 10:50:09.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741406, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 10:50:11.723: INFO: all replica sets need to contain the pod-template-hash label May 22 10:50:11.723: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741406, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 10:50:13.721: INFO: all replica sets need to contain the pod-template-hash label May 22 10:50:13.721: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741406, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 10:50:15.722: INFO: all replica sets need to contain the pod-template-hash label May 22 10:50:15.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741406, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725741399, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 10:50:17.722: INFO: May 22 10:50:17.722: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 22 10:50:17.729: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-7c484,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7c484/deployments/test-rollover-deployment,UID:fb8cc9b8-9c19-11ea-99e8-0242ac110002,ResourceVersion:11907694,Generation:2,CreationTimestamp:2020-05-22 10:49:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-22 10:49:59 +0000 UTC 2020-05-22 10:49:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-22 10:50:16 +0000 UTC 2020-05-22 10:49:59 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 22 10:50:17.733: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-7c484,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7c484/replicasets/test-rollover-deployment-5b8479fdb6,UID:fcc2eb9a-9c19-11ea-99e8-0242ac110002,ResourceVersion:11907685,Generation:2,CreationTimestamp:2020-05-22 10:50:01 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fb8cc9b8-9c19-11ea-99e8-0242ac110002 0xc00100d7a7 0xc00100d7a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 22 10:50:17.733: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 22 10:50:17.733: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-7c484,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7c484/replicasets/test-rollover-controller,UID:f75ad98f-9c19-11ea-99e8-0242ac110002,ResourceVersion:11907693,Generation:2,CreationTimestamp:2020-05-22 10:49:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fb8cc9b8-9c19-11ea-99e8-0242ac110002 0xc00100d5ff 0xc00100d610}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 10:50:17.733: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-7c484,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7c484/replicasets/test-rollover-deployment-58494b7559,UID:fb8f4e34-9c19-11ea-99e8-0242ac110002,ResourceVersion:11907648,Generation:2,CreationTimestamp:2020-05-22 10:49:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fb8cc9b8-9c19-11ea-99e8-0242ac110002 0xc00100d6d7 0xc00100d6d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 10:50:17.736: INFO: Pod "test-rollover-deployment-5b8479fdb6-44h98" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-44h98,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-7c484,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7c484/pods/test-rollover-deployment-5b8479fdb6-44h98,UID:fcd176de-9c19-11ea-99e8-0242ac110002,ResourceVersion:11907663,Generation:0,CreationTimestamp:2020-05-22 10:50:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 fcc2eb9a-9c19-11ea-99e8-0242ac110002 0xc001a09867 0xc001a09868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-drgxw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-drgxw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-drgxw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a098e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a09900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:50:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:50:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:50:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-22 10:50:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.159,StartTime:2020-05-22 10:50:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-22 10:50:05 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://0304b98bdddb0023576fe0fc72623c8c527f7d07e63bde3a702b980cbd6d7318}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:50:17.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-7c484" for this suite. May 22 10:50:25.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:50:25.796: INFO: namespace: e2e-tests-deployment-7c484, resource: bindings, ignored listing per whitelist May 22 10:50:25.827: INFO: namespace e2e-tests-deployment-7c484 deletion completed in 8.086482377s • [SLOW TEST:33.265 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:50:25.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 10:50:25.942: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.254967ms)
May 22 10:50:25.944: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.334404ms)
May 22 10:50:25.947: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.490453ms)
May 22 10:50:25.950: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.138699ms)
May 22 10:50:25.953: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.579064ms)
May 22 10:50:25.955: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.701005ms)
May 22 10:50:25.958: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.642037ms)
May 22 10:50:25.960: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.127711ms)
May 22 10:50:25.963: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.700877ms)
May 22 10:50:25.965: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.576145ms)
May 22 10:50:25.968: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.422686ms)
May 22 10:50:25.971: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.23545ms)
May 22 10:50:25.974: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.625679ms)
May 22 10:50:25.976: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.554921ms)
May 22 10:50:25.979: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.622602ms)
May 22 10:50:25.982: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.200696ms)
May 22 10:50:25.985: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.202212ms)
May 22 10:50:25.988: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.793176ms)
May 22 10:50:25.991: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.10456ms)
May 22 10:50:25.995: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/
(200; 3.433697ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:50:25.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-8hfhw" for this suite. May 22 10:50:32.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:50:32.035: INFO: namespace: e2e-tests-proxy-8hfhw, resource: bindings, ignored listing per whitelist May 22 10:50:32.083: INFO: namespace e2e-tests-proxy-8hfhw deletion completed in 6.084450364s • [SLOW TEST:6.256 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:50:32.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 22 10:50:37.027: INFO: Successfully updated pod "labelsupdate0f1acf37-9c1a-11ea-8e9c-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:50:39.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5hr2d" for this suite. 
May 22 10:51:01.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:51:01.219: INFO: namespace: e2e-tests-downward-api-5hr2d, resource: bindings, ignored listing per whitelist May 22 10:51:01.236: INFO: namespace e2e-tests-downward-api-5hr2d deletion completed in 22.087486449s • [SLOW TEST:29.152 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:51:01.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-204fba22-9c1a-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 10:51:01.364: INFO: Waiting up to 5m0s for pod "pod-configmaps-205242df-9c1a-11ea-8e9c-0242ac110018" in namespace "e2e-tests-configmap-pfljj" to be "success or failure" May 22 10:51:01.416: INFO: Pod "pod-configmaps-205242df-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 51.776587ms May 22 10:51:03.421: INFO: Pod "pod-configmaps-205242df-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056886069s May 22 10:51:05.426: INFO: Pod "pod-configmaps-205242df-9c1a-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061507892s STEP: Saw pod success May 22 10:51:05.426: INFO: Pod "pod-configmaps-205242df-9c1a-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:51:05.430: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-205242df-9c1a-11ea-8e9c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 22 10:51:05.718: INFO: Waiting for pod pod-configmaps-205242df-9c1a-11ea-8e9c-0242ac110018 to disappear May 22 10:51:05.733: INFO: Pod pod-configmaps-205242df-9c1a-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:51:05.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pfljj" for this suite. 
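The ConfigMap volume test that just finished mounts a ConfigMap into a pod with a key-to-path mapping (the items field) instead of exposing every key under its own name. A rough hand-rolled sketch of the same setup, with illustrative names in place of the test's generated ones:

# Create a ConfigMap, then mount a single key under a remapped path.
kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-1"]   # prints value-1 if the mapping worked
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
      items:
      - key: data-1            # ConfigMap key...
        path: path/to/data-1   # ...exposed under this relative path in the volume
EOF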
May 22 10:51:11.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:51:11.778: INFO: namespace: e2e-tests-configmap-pfljj, resource: bindings, ignored listing per whitelist May 22 10:51:11.861: INFO: namespace e2e-tests-configmap-pfljj deletion completed in 6.123199877s • [SLOW TEST:10.625 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:51:11.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-ft6zj [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-ft6zj STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-ft6zj May 22 10:51:12.015: INFO: Found 0 stateful pods, waiting for 1 May 22 10:51:22.019: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 22 10:51:22.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ft6zj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 10:51:22.281: INFO: stderr: "I0522 10:51:22.148844 40 log.go:172] (0xc00015c6e0) (0xc00073a640) Create stream\nI0522 10:51:22.148917 40 log.go:172] (0xc00015c6e0) (0xc00073a640) Stream added, broadcasting: 1\nI0522 10:51:22.151438 40 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0522 10:51:22.151486 40 log.go:172] (0xc00015c6e0) (0xc0006c2f00) Create stream\nI0522 10:51:22.151510 40 log.go:172] (0xc00015c6e0) (0xc0006c2f00) Stream added, broadcasting: 3\nI0522 10:51:22.152269 40 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0522 10:51:22.152295 40 log.go:172] (0xc00015c6e0) (0xc00073a6e0) Create stream\nI0522 10:51:22.152302 40 log.go:172] (0xc00015c6e0) (0xc00073a6e0) Stream added, broadcasting: 5\nI0522 10:51:22.153002 40 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0522 10:51:22.274316 40 log.go:172] (0xc00015c6e0) Data frame received for 5\nI0522 10:51:22.274377 40 log.go:172] (0xc00073a6e0) (5) Data frame handling\nI0522 10:51:22.274414 40 
log.go:172] (0xc00015c6e0) Data frame received for 3\nI0522 10:51:22.274439 40 log.go:172] (0xc0006c2f00) (3) Data frame handling\nI0522 10:51:22.274451 40 log.go:172] (0xc0006c2f00) (3) Data frame sent\nI0522 10:51:22.274471 40 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0522 10:51:22.274499 40 log.go:172] (0xc0006c2f00) (3) Data frame handling\nI0522 10:51:22.276260 40 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0522 10:51:22.276317 40 log.go:172] (0xc00073a640) (1) Data frame handling\nI0522 10:51:22.276340 40 log.go:172] (0xc00073a640) (1) Data frame sent\nI0522 10:51:22.276352 40 log.go:172] (0xc00015c6e0) (0xc00073a640) Stream removed, broadcasting: 1\nI0522 10:51:22.276373 40 log.go:172] (0xc00015c6e0) Go away received\nI0522 10:51:22.276542 40 log.go:172] (0xc00015c6e0) (0xc00073a640) Stream removed, broadcasting: 1\nI0522 10:51:22.276565 40 log.go:172] (0xc00015c6e0) (0xc0006c2f00) Stream removed, broadcasting: 3\nI0522 10:51:22.276573 40 log.go:172] (0xc00015c6e0) (0xc00073a6e0) Stream removed, broadcasting: 5\n" May 22 10:51:22.281: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 10:51:22.281: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 10:51:22.286: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 22 10:51:32.290: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 22 10:51:32.290: INFO: Waiting for statefulset status.replicas updated to 0 May 22 10:51:32.303: INFO: POD NODE PHASE GRACE CONDITIONS May 22 10:51:32.304: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC }] May 22 10:51:32.304: INFO: May 22 10:51:32.304: INFO: StatefulSet ss has not reached scale 3, at 1 May 22 10:51:33.310: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996002394s May 22 10:51:34.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989535552s May 22 10:51:35.669: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.635649394s May 22 10:51:36.674: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.631052545s May 22 10:51:37.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.625552764s May 22 10:51:38.685: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.619795009s May 22 10:51:39.691: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.614620734s May 22 10:51:40.696: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.608864685s May 22 10:51:41.702: INFO: Verifying statefulset ss doesn't scale past 3 for another 603.618827ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-ft6zj May 22 10:51:42.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ft6zj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 10:51:42.945: INFO: stderr: "I0522 10:51:42.854087 62 
log.go:172] (0xc0001546e0) (0xc0007bd4a0) Create stream\nI0522 10:51:42.854164 62 log.go:172] (0xc0001546e0) (0xc0007bd4a0) Stream added, broadcasting: 1\nI0522 10:51:42.856328 62 log.go:172] (0xc0001546e0) Reply frame received for 1\nI0522 10:51:42.856384 62 log.go:172] (0xc0001546e0) (0xc000726000) Create stream\nI0522 10:51:42.856402 62 log.go:172] (0xc0001546e0) (0xc000726000) Stream added, broadcasting: 3\nI0522 10:51:42.857542 62 log.go:172] (0xc0001546e0) Reply frame received for 3\nI0522 10:51:42.857584 62 log.go:172] (0xc0001546e0) (0xc0005dc000) Create stream\nI0522 10:51:42.857598 62 log.go:172] (0xc0001546e0) (0xc0005dc000) Stream added, broadcasting: 5\nI0522 10:51:42.858308 62 log.go:172] (0xc0001546e0) Reply frame received for 5\nI0522 10:51:42.937699 62 log.go:172] (0xc0001546e0) Data frame received for 3\nI0522 10:51:42.937740 62 log.go:172] (0xc000726000) (3) Data frame handling\nI0522 10:51:42.937749 62 log.go:172] (0xc000726000) (3) Data frame sent\nI0522 10:51:42.937755 62 log.go:172] (0xc0001546e0) Data frame received for 3\nI0522 10:51:42.937760 62 log.go:172] (0xc000726000) (3) Data frame handling\nI0522 10:51:42.937785 62 log.go:172] (0xc0001546e0) Data frame received for 5\nI0522 10:51:42.937791 62 log.go:172] (0xc0005dc000) (5) Data frame handling\nI0522 10:51:42.939352 62 log.go:172] (0xc0001546e0) Data frame received for 1\nI0522 10:51:42.939371 62 log.go:172] (0xc0007bd4a0) (1) Data frame handling\nI0522 10:51:42.939382 62 log.go:172] (0xc0007bd4a0) (1) Data frame sent\nI0522 10:51:42.939394 62 log.go:172] (0xc0001546e0) (0xc0007bd4a0) Stream removed, broadcasting: 1\nI0522 10:51:42.939482 62 log.go:172] (0xc0001546e0) Go away received\nI0522 10:51:42.939584 62 log.go:172] (0xc0001546e0) (0xc0007bd4a0) Stream removed, broadcasting: 1\nI0522 10:51:42.939598 62 log.go:172] (0xc0001546e0) (0xc000726000) Stream removed, broadcasting: 3\nI0522 10:51:42.939612 62 log.go:172] (0xc0001546e0) (0xc0005dc000) Stream removed, broadcasting: 5\n" May 22 10:51:42.945: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 10:51:42.945: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 10:51:42.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ft6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 10:51:43.168: INFO: stderr: "I0522 10:51:43.070649 85 log.go:172] (0xc000154790) (0xc00072a780) Create stream\nI0522 10:51:43.070718 85 log.go:172] (0xc000154790) (0xc00072a780) Stream added, broadcasting: 1\nI0522 10:51:43.073690 85 log.go:172] (0xc000154790) Reply frame received for 1\nI0522 10:51:43.073748 85 log.go:172] (0xc000154790) (0xc0004cc5a0) Create stream\nI0522 10:51:43.073763 85 log.go:172] (0xc000154790) (0xc0004cc5a0) Stream added, broadcasting: 3\nI0522 10:51:43.074696 85 log.go:172] (0xc000154790) Reply frame received for 3\nI0522 10:51:43.074730 85 log.go:172] (0xc000154790) (0xc00072a820) Create stream\nI0522 10:51:43.074738 85 log.go:172] (0xc000154790) (0xc00072a820) Stream added, broadcasting: 5\nI0522 10:51:43.075575 85 log.go:172] (0xc000154790) Reply frame received for 5\nI0522 10:51:43.160103 85 log.go:172] (0xc000154790) Data frame received for 5\nI0522 10:51:43.160152 85 log.go:172] (0xc00072a820) (5) Data frame handling\nI0522 10:51:43.160171 85 log.go:172] (0xc00072a820) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or 
directory\nI0522 10:51:43.160186 85 log.go:172] (0xc000154790) Data frame received for 5\nI0522 10:51:43.160244 85 log.go:172] (0xc00072a820) (5) Data frame handling\nI0522 10:51:43.160290 85 log.go:172] (0xc000154790) Data frame received for 3\nI0522 10:51:43.160310 85 log.go:172] (0xc0004cc5a0) (3) Data frame handling\nI0522 10:51:43.160327 85 log.go:172] (0xc0004cc5a0) (3) Data frame sent\nI0522 10:51:43.160342 85 log.go:172] (0xc000154790) Data frame received for 3\nI0522 10:51:43.160355 85 log.go:172] (0xc0004cc5a0) (3) Data frame handling\nI0522 10:51:43.162386 85 log.go:172] (0xc000154790) Data frame received for 1\nI0522 10:51:43.162401 85 log.go:172] (0xc00072a780) (1) Data frame handling\nI0522 10:51:43.162410 85 log.go:172] (0xc00072a780) (1) Data frame sent\nI0522 10:51:43.162426 85 log.go:172] (0xc000154790) (0xc00072a780) Stream removed, broadcasting: 1\nI0522 10:51:43.162553 85 log.go:172] (0xc000154790) Go away received\nI0522 10:51:43.162577 85 log.go:172] (0xc000154790) (0xc00072a780) Stream removed, broadcasting: 1\nI0522 10:51:43.162609 85 log.go:172] (0xc000154790) (0xc0004cc5a0) Stream removed, broadcasting: 3\nI0522 10:51:43.162629 85 log.go:172] (0xc000154790) (0xc00072a820) Stream removed, broadcasting: 5\n" May 22 10:51:43.168: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 10:51:43.168: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 10:51:43.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ft6zj ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 10:51:43.367: INFO: stderr: "I0522 10:51:43.291628 106 log.go:172] (0xc00014c630) (0xc000622640) Create stream\nI0522 10:51:43.291694 106 log.go:172] (0xc00014c630) (0xc000622640) Stream added, broadcasting: 1\nI0522 10:51:43.295096 106 log.go:172] (0xc00014c630) Reply frame received for 1\nI0522 10:51:43.295142 106 log.go:172] (0xc00014c630) (0xc0006226e0) Create stream\nI0522 10:51:43.295167 106 log.go:172] (0xc00014c630) (0xc0006226e0) Stream added, broadcasting: 3\nI0522 10:51:43.296360 106 log.go:172] (0xc00014c630) Reply frame received for 3\nI0522 10:51:43.296393 106 log.go:172] (0xc00014c630) (0xc0006acbe0) Create stream\nI0522 10:51:43.296403 106 log.go:172] (0xc00014c630) (0xc0006acbe0) Stream added, broadcasting: 5\nI0522 10:51:43.297813 106 log.go:172] (0xc00014c630) Reply frame received for 5\nI0522 10:51:43.360204 106 log.go:172] (0xc00014c630) Data frame received for 3\nI0522 10:51:43.360240 106 log.go:172] (0xc0006226e0) (3) Data frame handling\nI0522 10:51:43.360254 106 log.go:172] (0xc0006226e0) (3) Data frame sent\nI0522 10:51:43.360263 106 log.go:172] (0xc00014c630) Data frame received for 3\nI0522 10:51:43.360276 106 log.go:172] (0xc0006226e0) (3) Data frame handling\nI0522 10:51:43.360364 106 log.go:172] (0xc00014c630) Data frame received for 5\nI0522 10:51:43.360390 106 log.go:172] (0xc0006acbe0) (5) Data frame handling\nI0522 10:51:43.360413 106 log.go:172] (0xc0006acbe0) (5) Data frame sent\nI0522 10:51:43.360428 106 log.go:172] (0xc00014c630) Data frame received for 5\nI0522 10:51:43.360440 106 log.go:172] (0xc0006acbe0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0522 10:51:43.362160 106 log.go:172] (0xc00014c630) Data frame received for 1\nI0522 10:51:43.362186 106 log.go:172] (0xc000622640) (1) Data frame handling\nI0522 
10:51:43.362208 106 log.go:172] (0xc000622640) (1) Data frame sent\nI0522 10:51:43.362244 106 log.go:172] (0xc00014c630) (0xc000622640) Stream removed, broadcasting: 1\nI0522 10:51:43.362497 106 log.go:172] (0xc00014c630) (0xc000622640) Stream removed, broadcasting: 1\nI0522 10:51:43.362524 106 log.go:172] (0xc00014c630) (0xc0006226e0) Stream removed, broadcasting: 3\nI0522 10:51:43.362556 106 log.go:172] (0xc00014c630) Go away received\nI0522 10:51:43.362713 106 log.go:172] (0xc00014c630) (0xc0006acbe0) Stream removed, broadcasting: 5\n" May 22 10:51:43.367: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 10:51:43.367: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 10:51:43.371: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 22 10:51:53.375: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 22 10:51:53.375: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 22 10:51:53.375: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 22 10:51:53.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ft6zj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 10:51:53.611: INFO: stderr: "I0522 10:51:53.506400 128 log.go:172] (0xc000138630) (0xc0006e8640) Create stream\nI0522 10:51:53.506460 128 log.go:172] (0xc000138630) (0xc0006e8640) Stream added, broadcasting: 1\nI0522 10:51:53.509969 128 log.go:172] (0xc000138630) Reply frame received for 1\nI0522 10:51:53.510012 128 log.go:172] (0xc000138630) (0xc0006e86e0) Create stream\nI0522 10:51:53.510023 128 log.go:172] (0xc000138630) (0xc0006e86e0) Stream added, broadcasting: 3\nI0522 10:51:53.510919 128 log.go:172] (0xc000138630) Reply frame received for 3\nI0522 10:51:53.510962 128 log.go:172] (0xc000138630) (0xc0005bedc0) Create stream\nI0522 10:51:53.510992 128 log.go:172] (0xc000138630) (0xc0005bedc0) Stream added, broadcasting: 5\nI0522 10:51:53.511939 128 log.go:172] (0xc000138630) Reply frame received for 5\nI0522 10:51:53.605577 128 log.go:172] (0xc000138630) Data frame received for 5\nI0522 10:51:53.605624 128 log.go:172] (0xc0005bedc0) (5) Data frame handling\nI0522 10:51:53.605652 128 log.go:172] (0xc000138630) Data frame received for 3\nI0522 10:51:53.605664 128 log.go:172] (0xc0006e86e0) (3) Data frame handling\nI0522 10:51:53.605677 128 log.go:172] (0xc0006e86e0) (3) Data frame sent\nI0522 10:51:53.605697 128 log.go:172] (0xc000138630) Data frame received for 3\nI0522 10:51:53.605710 128 log.go:172] (0xc0006e86e0) (3) Data frame handling\nI0522 10:51:53.607043 128 log.go:172] (0xc000138630) Data frame received for 1\nI0522 10:51:53.607056 128 log.go:172] (0xc0006e8640) (1) Data frame handling\nI0522 10:51:53.607063 128 log.go:172] (0xc0006e8640) (1) Data frame sent\nI0522 10:51:53.607070 128 log.go:172] (0xc000138630) (0xc0006e8640) Stream removed, broadcasting: 1\nI0522 10:51:53.607237 128 log.go:172] (0xc000138630) (0xc0006e8640) Stream removed, broadcasting: 1\nI0522 10:51:53.607257 128 log.go:172] (0xc000138630) (0xc0006e86e0) Stream removed, broadcasting: 3\nI0522 10:51:53.607266 128 log.go:172] (0xc000138630) (0xc0005bedc0) Stream removed, broadcasting: 5\n" May 22 10:51:53.611: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 10:51:53.611: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 10:51:53.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ft6zj ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 10:51:53.848: INFO: stderr: "I0522 10:51:53.729783 151 log.go:172] (0xc0006b6420) (0xc0002cf4a0) Create stream\nI0522 10:51:53.729834 151 log.go:172] (0xc0006b6420) (0xc0002cf4a0) Stream added, broadcasting: 1\nI0522 10:51:53.732488 151 log.go:172] (0xc0006b6420) Reply frame received for 1\nI0522 10:51:53.732529 151 log.go:172] (0xc0006b6420) (0xc00042a000) Create stream\nI0522 10:51:53.732541 151 log.go:172] (0xc0006b6420) (0xc00042a000) Stream added, broadcasting: 3\nI0522 10:51:53.733761 151 log.go:172] (0xc0006b6420) Reply frame received for 3\nI0522 10:51:53.733816 151 log.go:172] (0xc0006b6420) (0xc0002cf540) Create stream\nI0522 10:51:53.733839 151 log.go:172] (0xc0006b6420) (0xc0002cf540) Stream added, broadcasting: 5\nI0522 10:51:53.734679 151 log.go:172] (0xc0006b6420) Reply frame received for 5\nI0522 10:51:53.840476 151 log.go:172] (0xc0006b6420) Data frame received for 3\nI0522 10:51:53.840496 151 log.go:172] (0xc00042a000) (3) Data frame handling\nI0522 10:51:53.840507 151 log.go:172] (0xc00042a000) (3) Data frame sent\nI0522 10:51:53.840751 151 log.go:172] (0xc0006b6420) Data frame received for 3\nI0522 10:51:53.840766 151 log.go:172] (0xc00042a000) (3) Data frame handling\nI0522 10:51:53.841553 151 log.go:172] (0xc0006b6420) Data frame received for 5\nI0522 10:51:53.841581 151 log.go:172] (0xc0002cf540) (5) Data frame handling\nI0522 10:51:53.843484 151 log.go:172] (0xc0006b6420) Data frame received for 1\nI0522 10:51:53.843567 151 log.go:172] (0xc0002cf4a0) (1) Data frame handling\nI0522 10:51:53.843649 151 log.go:172] (0xc0002cf4a0) (1) Data frame sent\nI0522 10:51:53.843800 151 log.go:172] (0xc0006b6420) (0xc0002cf4a0) Stream removed, broadcasting: 1\nI0522 10:51:53.843873 151 log.go:172] (0xc0006b6420) Go away received\nI0522 10:51:53.844181 151 log.go:172] (0xc0006b6420) (0xc0002cf4a0) Stream removed, broadcasting: 1\nI0522 10:51:53.844203 151 log.go:172] (0xc0006b6420) (0xc00042a000) Stream removed, broadcasting: 3\nI0522 10:51:53.844215 151 log.go:172] (0xc0006b6420) (0xc0002cf540) Stream removed, broadcasting: 5\n" May 22 10:51:53.848: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 10:51:53.848: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 10:51:53.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ft6zj ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 10:51:54.048: INFO: stderr: "I0522 10:51:53.966225 173 log.go:172] (0xc0007d8160) (0xc000438e60) Create stream\nI0522 10:51:53.966296 173 log.go:172] (0xc0007d8160) (0xc000438e60) Stream added, broadcasting: 1\nI0522 10:51:53.968858 173 log.go:172] (0xc0007d8160) Reply frame received for 1\nI0522 10:51:53.968890 173 log.go:172] (0xc0007d8160) (0xc000688000) Create stream\nI0522 10:51:53.968899 173 log.go:172] (0xc0007d8160) (0xc000688000) Stream added, broadcasting: 3\nI0522 10:51:53.969984 173 log.go:172] (0xc0007d8160) Reply frame received for 3\nI0522 10:51:53.970023 173 
log.go:172] (0xc0007d8160) (0xc000866000) Create stream\nI0522 10:51:53.970036 173 log.go:172] (0xc0007d8160) (0xc000866000) Stream added, broadcasting: 5\nI0522 10:51:53.971001 173 log.go:172] (0xc0007d8160) Reply frame received for 5\nI0522 10:51:54.040464 173 log.go:172] (0xc0007d8160) Data frame received for 5\nI0522 10:51:54.040498 173 log.go:172] (0xc000866000) (5) Data frame handling\nI0522 10:51:54.040535 173 log.go:172] (0xc0007d8160) Data frame received for 3\nI0522 10:51:54.040581 173 log.go:172] (0xc000688000) (3) Data frame handling\nI0522 10:51:54.040606 173 log.go:172] (0xc000688000) (3) Data frame sent\nI0522 10:51:54.040631 173 log.go:172] (0xc0007d8160) Data frame received for 3\nI0522 10:51:54.040645 173 log.go:172] (0xc000688000) (3) Data frame handling\nI0522 10:51:54.042828 173 log.go:172] (0xc0007d8160) Data frame received for 1\nI0522 10:51:54.042868 173 log.go:172] (0xc000438e60) (1) Data frame handling\nI0522 10:51:54.042889 173 log.go:172] (0xc000438e60) (1) Data frame sent\nI0522 10:51:54.042907 173 log.go:172] (0xc0007d8160) (0xc000438e60) Stream removed, broadcasting: 1\nI0522 10:51:54.042946 173 log.go:172] (0xc0007d8160) Go away received\nI0522 10:51:54.043199 173 log.go:172] (0xc0007d8160) (0xc000438e60) Stream removed, broadcasting: 1\nI0522 10:51:54.043231 173 log.go:172] (0xc0007d8160) (0xc000688000) Stream removed, broadcasting: 3\nI0522 10:51:54.043245 173 log.go:172] (0xc0007d8160) (0xc000866000) Stream removed, broadcasting: 5\n" May 22 10:51:54.048: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 10:51:54.048: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 10:51:54.048: INFO: Waiting for statefulset status.replicas updated to 0 May 22 10:51:54.053: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 22 10:52:04.062: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 22 10:52:04.062: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 22 10:52:04.062: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 22 10:52:04.083: INFO: POD NODE PHASE GRACE CONDITIONS May 22 10:52:04.083: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC }] May 22 10:52:04.083: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:04.083: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:04.083: INFO: May 22 10:52:04.083: INFO: StatefulSet ss has not reached scale 0, at 3 May 22 10:52:05.087: INFO: POD NODE PHASE GRACE CONDITIONS May 22 10:52:05.088: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC }] May 22 10:52:05.088: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:05.088: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:05.088: INFO: May 22 10:52:05.088: INFO: StatefulSet ss has not reached scale 0, at 3 May 22 10:52:06.384: INFO: POD NODE PHASE GRACE CONDITIONS May 22 10:52:06.384: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC }] May 22 10:52:06.384: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:06.384: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] 
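While the scale-down wait above keeps polling the remaining pods, it is worth spelling out how this suite drives readiness in the first place: the pods' readiness check depends on /usr/share/nginx/html/index.html being servable, so moving that file out of (and back into) place flips a pod between NotReady and Ready without restarting it. A sketch of the same trick by hand, assuming a StatefulSet named ss in the current namespace whose pods run the same nginx image:

# Force ss-0 to report NotReady by hiding the file its readiness check serves.
kubectl exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Restore the file so the check passes and the pod becomes Ready again.
kubectl exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
# With burst (Parallel) pod management, scaling does not wait for unready pods.
kubectl scale statefulset ss --replicas=3
kubectl scale statefulset ss --replicas=0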
May 22 10:52:06.384: INFO: May 22 10:52:06.384: INFO: StatefulSet ss has not reached scale 0, at 3 May 22 10:52:07.515: INFO: POD NODE PHASE GRACE CONDITIONS May 22 10:52:07.515: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:12 +0000 UTC }] May 22 10:52:07.515: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:07.515: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:07.515: INFO: May 22 10:52:07.515: INFO: StatefulSet ss has not reached scale 0, at 3 May 22 10:52:08.683: INFO: POD NODE PHASE GRACE CONDITIONS May 22 10:52:08.683: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:08.683: INFO: May 22 10:52:08.683: INFO: StatefulSet ss has not reached scale 0, at 1 May 22 10:52:09.853: INFO: POD NODE PHASE GRACE CONDITIONS May 22 10:52:09.853: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:09.853: INFO: May 22 10:52:09.853: INFO: StatefulSet ss has not reached scale 0, at 1 May 22 10:52:11.263: INFO: POD NODE PHASE GRACE CONDITIONS May 22 10:52:11.264: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:51:32 +0000 UTC }] May 22 10:52:11.264: INFO: May 22 10:52:11.264: INFO: StatefulSet ss has not reached scale 0, at 1 May 22 10:52:12.916: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.804452585s May 22 10:52:13.981: INFO: Verifying statefulset ss doesn't scale past 0 for another 151.829065ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-ft6zj May 22 10:52:15.375: INFO: Scaling statefulset ss to 0 May 22 10:52:15.601: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 22 10:52:15.609: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ft6zj May 22 10:52:15.611: INFO: Scaling statefulset ss to 0 May 22 10:52:15.619: INFO: Waiting for statefulset status.replicas updated to 0 May 22 10:52:15.622: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:52:15.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-ft6zj" for this suite. May 22 10:52:23.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:52:23.364: INFO: namespace: e2e-tests-statefulset-ft6zj, resource: bindings, ignored listing per whitelist May 22 10:52:23.392: INFO: namespace e2e-tests-statefulset-ft6zj deletion completed in 7.748800066s • [SLOW TEST:71.532 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:52:23.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 10:52:24.480: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-7vlvc" to be "success or failure" May 22 10:52:24.730: INFO: Pod "downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 249.585333ms May 22 10:52:26.734: INFO: Pod "downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253896757s May 22 10:52:28.755: INFO: Pod "downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274318058s May 22 10:52:30.970: INFO: Pod "downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489479894s May 22 10:52:33.211: INFO: Pod "downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 8.73024992s May 22 10:52:35.725: INFO: Pod "downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.244503805s STEP: Saw pod success May 22 10:52:35.725: INFO: Pod "downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:52:35.784: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 10:52:35.978: INFO: Waiting for pod downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018 to disappear May 22 10:52:36.561: INFO: Pod downwardapi-volume-51d9e783-9c1a-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:52:36.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7vlvc" for this suite. May 22 10:52:42.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:52:42.940: INFO: namespace: e2e-tests-downward-api-7vlvc, resource: bindings, ignored listing per whitelist May 22 10:52:42.941: INFO: namespace e2e-tests-downward-api-7vlvc deletion completed in 6.376881546s • [SLOW TEST:19.549 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:52:42.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-5d511a0e-9c1a-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 10:52:44.255: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-7rjh7" to be "success or failure" May 22 10:52:44.446: INFO: Pod 
"pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 190.761024ms May 22 10:52:46.487: INFO: Pod "pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231454509s May 22 10:52:48.533: INFO: Pod "pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.277703665s May 22 10:52:50.545: INFO: Pod "pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.289896642s May 22 10:52:52.575: INFO: Pod "pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.31961968s STEP: Saw pod success May 22 10:52:52.575: INFO: Pod "pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:52:52.578: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 22 10:52:52.845: INFO: Waiting for pod pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018 to disappear May 22 10:52:52.982: INFO: Pod pod-projected-configmaps-5d5670b0-9c1a-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:52:52.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7rjh7" for this suite. May 22 10:52:59.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:52:59.051: INFO: namespace: e2e-tests-projected-7rjh7, resource: bindings, ignored listing per whitelist May 22 10:52:59.079: INFO: namespace e2e-tests-projected-7rjh7 deletion completed in 6.09322737s • [SLOW TEST:16.137 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:52:59.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 22 10:52:59.212: INFO: Waiting up to 5m0s for pod "pod-668f10ca-9c1a-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-z6t8w" to be "success or failure" May 22 10:52:59.215: INFO: Pod "pod-668f10ca-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.27693ms May 22 10:53:01.220: INFO: Pod "pod-668f10ca-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007755582s May 22 10:53:03.223: INFO: Pod "pod-668f10ca-9c1a-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011435173s STEP: Saw pod success May 22 10:53:03.223: INFO: Pod "pod-668f10ca-9c1a-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:53:03.226: INFO: Trying to get logs from node hunter-worker2 pod pod-668f10ca-9c1a-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 10:53:03.331: INFO: Waiting for pod pod-668f10ca-9c1a-11ea-8e9c-0242ac110018 to disappear May 22 10:53:03.341: INFO: Pod pod-668f10ca-9c1a-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:53:03.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-z6t8w" for this suite. May 22 10:53:09.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:53:09.436: INFO: namespace: e2e-tests-emptydir-z6t8w, resource: bindings, ignored listing per whitelist May 22 10:53:09.442: INFO: namespace e2e-tests-emptydir-z6t8w deletion completed in 6.097834056s • [SLOW TEST:10.363 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:53:09.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-bpqp STEP: Creating a pod to test atomic-volume-subpath May 22 10:53:09.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bpqp" in namespace "e2e-tests-subpath-csgr2" to be "success or failure" May 22 10:53:09.582: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.24478ms May 22 10:53:11.587: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015210122s May 22 10:53:13.591: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019451463s May 22 10:53:15.595: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.023242483s May 22 10:53:17.598: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 8.02691289s May 22 10:53:19.603: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 10.03126363s May 22 10:53:21.607: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 12.035883776s May 22 10:53:23.612: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 14.040450851s May 22 10:53:25.616: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 16.044966942s May 22 10:53:27.620: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 18.048959149s May 22 10:53:29.625: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 20.053282796s May 22 10:53:31.629: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 22.057265477s May 22 10:53:33.633: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 24.061796212s May 22 10:53:35.636: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Running", Reason="", readiness=false. Elapsed: 26.065181455s May 22 10:53:37.641: INFO: Pod "pod-subpath-test-configmap-bpqp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.069816013s STEP: Saw pod success May 22 10:53:37.641: INFO: Pod "pod-subpath-test-configmap-bpqp" satisfied condition "success or failure" May 22 10:53:37.644: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-bpqp container test-container-subpath-configmap-bpqp: STEP: delete the pod May 22 10:53:37.736: INFO: Waiting for pod pod-subpath-test-configmap-bpqp to disappear May 22 10:53:37.869: INFO: Pod pod-subpath-test-configmap-bpqp no longer exists STEP: Deleting pod pod-subpath-test-configmap-bpqp May 22 10:53:37.869: INFO: Deleting pod "pod-subpath-test-configmap-bpqp" in namespace "e2e-tests-subpath-csgr2" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:53:37.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-csgr2" for this suite. 
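The atomic-writer subpath test above mounts a single file from a ConfigMap over a path that already exists inside the container image, by combining mountPath with subPath. A minimal illustrative manifest (the names, key, and target file below are made up for the sketch, not the test's generated ones; /etc/passwd is used only because it already exists in the busybox image):

kubectl create configmap subpath-demo-cm --from-literal=passwd='overridden-by-configmap'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-existing-file-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/passwd"]   # the pre-existing file now shows the ConfigMap content
    volumeMounts:
    - name: cm
      mountPath: /etc/passwd          # an existing file inside the container image
      subPath: passwd                 # mount only this key, directly over that file
  volumes:
  - name: cm
    configMap:
      name: subpath-demo-cm
EOF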
May 22 10:53:45.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:53:45.995: INFO: namespace: e2e-tests-subpath-csgr2, resource: bindings, ignored listing per whitelist May 22 10:53:45.995: INFO: namespace e2e-tests-subpath-csgr2 deletion completed in 8.120413915s • [SLOW TEST:36.553 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:53:45.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 22 10:53:46.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4t99v' May 22 10:53:48.667: INFO: stderr: "" May 22 10:53:48.667: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 22 10:53:49.672: INFO: Selector matched 1 pods for map[app:redis] May 22 10:53:49.672: INFO: Found 0 / 1 May 22 10:53:50.672: INFO: Selector matched 1 pods for map[app:redis] May 22 10:53:50.673: INFO: Found 0 / 1 May 22 10:53:51.672: INFO: Selector matched 1 pods for map[app:redis] May 22 10:53:51.672: INFO: Found 0 / 1 May 22 10:53:52.672: INFO: Selector matched 1 pods for map[app:redis] May 22 10:53:52.673: INFO: Found 1 / 1 May 22 10:53:52.673: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 22 10:53:52.676: INFO: Selector matched 1 pods for map[app:redis] May 22 10:53:52.676: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 22 10:53:52.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v' May 22 10:53:52.802: INFO: stderr: "" May 22 10:53:52.802: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 May 10:53:51.672 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 May 10:53:51.672 # Server started, Redis version 3.2.12\n1:M 22 May 10:53:51.672 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 May 10:53:51.672 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 22 10:53:52.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --tail=1' May 22 10:53:52.917: INFO: stderr: "" May 22 10:53:52.917: INFO: stdout: "1:M 22 May 10:53:51.672 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 22 10:53:52.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --limit-bytes=1' May 22 10:53:53.041: INFO: stderr: "" May 22 10:53:53.041: INFO: stdout: " " STEP: exposing timestamps May 22 10:53:53.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --tail=1 --timestamps' May 22 10:53:53.152: INFO: stderr: "" May 22 10:53:53.152: INFO: stdout: "2020-05-22T10:53:51.67256349Z 1:M 22 May 10:53:51.672 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 22 10:53:55.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --since=1s' May 22 10:53:55.766: INFO: stderr: "" May 22 10:53:55.766: INFO: stdout: "" May 22 10:53:55.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --since=24h' May 22 10:53:55.874: INFO: stderr: "" May 22 10:53:55.874: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 May 10:53:51.672 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 May 10:53:51.672 # Server started, Redis version 3.2.12\n1:M 22 May 10:53:51.672 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 May 10:53:51.672 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 22 10:53:55.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4t99v' May 22 10:53:56.003: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 10:53:56.003: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 22 10:53:56.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-4t99v' May 22 10:53:56.106: INFO: stderr: "No resources found.\n" May 22 10:53:56.106: INFO: stdout: "" May 22 10:53:56.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-4t99v -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 22 10:53:56.348: INFO: stderr: "" May 22 10:53:56.348: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:53:56.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4t99v" for this suite. 
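The log-filtering flags exercised by this Kubectl test (--tail, --limit-bytes, --timestamps, --since) work the same way against any running pod. A minimal sketch using the pod, container, and namespace names captured above (they are specific to this run); the test drove the older `kubectl log` alias, but `kubectl logs` accepts the identical flags:

# show only the last line of the container's log
kubectl logs redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --tail=1
# cap the output at a single byte
kubectl logs redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --limit-bytes=1
# prefix each returned line with its timestamp
kubectl logs redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --tail=1 --timestamps
# restrict output to a time window (empty for the last second of silence, everything for 24h)
kubectl logs redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --since=1s
kubectl logs redis-master-8cjjr redis-master --namespace=e2e-tests-kubectl-4t99v --since=24h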
May 22 10:54:18.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:54:18.455: INFO: namespace: e2e-tests-kubectl-4t99v, resource: bindings, ignored listing per whitelist May 22 10:54:18.476: INFO: namespace e2e-tests-kubectl-4t99v deletion completed in 22.123967834s • [SLOW TEST:32.481 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:54:18.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-x9pv STEP: Creating a pod to test atomic-volume-subpath May 22 10:54:18.644: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x9pv" in namespace "e2e-tests-subpath-frz2g" to be "success or failure" May 22 10:54:18.662: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Pending", Reason="", readiness=false. Elapsed: 17.79234ms May 22 10:54:20.665: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020974002s May 22 10:54:22.669: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024157923s May 22 10:54:24.954: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309154652s May 22 10:54:26.958: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313768671s May 22 10:54:28.961: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Running", Reason="", readiness=false. Elapsed: 10.316873982s May 22 10:54:30.966: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Running", Reason="", readiness=false. Elapsed: 12.321794485s May 22 10:54:32.969: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Running", Reason="", readiness=false. Elapsed: 14.325035838s May 22 10:54:34.972: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Running", Reason="", readiness=false. Elapsed: 16.327899156s May 22 10:54:36.981: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Running", Reason="", readiness=false. Elapsed: 18.336503285s May 22 10:54:38.985: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Running", Reason="", readiness=false. Elapsed: 20.340738051s May 22 10:54:40.989: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.345022641s May 22 10:54:42.994: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Running", Reason="", readiness=false. Elapsed: 24.349424793s May 22 10:54:44.997: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Running", Reason="", readiness=false. Elapsed: 26.352900275s May 22 10:54:47.002: INFO: Pod "pod-subpath-test-configmap-x9pv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.357311841s STEP: Saw pod success May 22 10:54:47.002: INFO: Pod "pod-subpath-test-configmap-x9pv" satisfied condition "success or failure" May 22 10:54:47.005: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-x9pv container test-container-subpath-configmap-x9pv: STEP: delete the pod May 22 10:54:47.046: INFO: Waiting for pod pod-subpath-test-configmap-x9pv to disappear May 22 10:54:47.229: INFO: Pod pod-subpath-test-configmap-x9pv no longer exists STEP: Deleting pod pod-subpath-test-configmap-x9pv May 22 10:54:47.229: INFO: Deleting pod "pod-subpath-test-configmap-x9pv" in namespace "e2e-tests-subpath-frz2g" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:54:47.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-frz2g" for this suite. May 22 10:54:53.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:54:53.306: INFO: namespace: e2e-tests-subpath-frz2g, resource: bindings, ignored listing per whitelist May 22 10:54:53.338: INFO: namespace e2e-tests-subpath-frz2g deletion completed in 6.103658751s • [SLOW TEST:34.862 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:54:53.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 22 10:54:53.698: INFO: Waiting up to 5m0s for pod "pod-aac4ce90-9c1a-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-6pv2x" to be "success or failure" May 22 10:54:53.703: INFO: Pod "pod-aac4ce90-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.539796ms May 22 10:54:55.707: INFO: Pod "pod-aac4ce90-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00942145s May 22 10:54:58.111: INFO: Pod "pod-aac4ce90-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.413774026s May 22 10:55:00.115: INFO: Pod "pod-aac4ce90-9c1a-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.41749989s STEP: Saw pod success May 22 10:55:00.115: INFO: Pod "pod-aac4ce90-9c1a-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:55:00.118: INFO: Trying to get logs from node hunter-worker2 pod pod-aac4ce90-9c1a-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 10:55:00.232: INFO: Waiting for pod pod-aac4ce90-9c1a-11ea-8e9c-0242ac110018 to disappear May 22 10:55:00.267: INFO: Pod pod-aac4ce90-9c1a-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:55:00.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6pv2x" for this suite. May 22 10:55:06.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:55:06.547: INFO: namespace: e2e-tests-emptydir-6pv2x, resource: bindings, ignored listing per whitelist May 22 10:55:06.655: INFO: namespace e2e-tests-emptydir-6pv2x deletion completed in 6.183944429s • [SLOW TEST:13.317 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:55:06.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:55:06.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8hv8p" for this suite. 
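The QOS-class verification in the Pods Extended test above can be reproduced by hand: a pod that declares no resource requests or limits is classified BestEffort, and the class is recorded in the pod's status. A minimal sketch with illustrative names (qos-demo, busybox) that are assumptions, not taken from this run:

kubectl create namespace qos-demo
kubectl apply --namespace=qos-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-besteffort
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    # no resources.requests / resources.limits -> BestEffort
EOF
# prints "BestEffort"
kubectl get pod qos-besteffort --namespace=qos-demo -o jsonpath='{.status.qosClass}'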
May 22 10:55:29.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:55:29.138: INFO: namespace: e2e-tests-pods-8hv8p, resource: bindings, ignored listing per whitelist May 22 10:55:29.178: INFO: namespace e2e-tests-pods-8hv8p deletion completed in 22.180111277s • [SLOW TEST:22.523 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:55:29.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:55:33.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-bkvbq" for this suite. 
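The /etc/hosts entries that the Kubelet hostAliases test checks for come directly from the pod's spec.hostAliases field, which the kubelet appends to the container's hosts file. A minimal sketch; the alias values and names here are illustrative, since the test's own entries are not printed in this log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: check
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
# once the container has run, its log shows the appended alias lines
kubectl logs hostaliases-demo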
May 22 10:56:23.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:56:23.442: INFO: namespace: e2e-tests-kubelet-test-bkvbq, resource: bindings, ignored listing per whitelist May 22 10:56:23.442: INFO: namespace e2e-tests-kubelet-test-bkvbq deletion completed in 50.097982034s • [SLOW TEST:54.264 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:56:23.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0522 10:56:54.134937 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 22 10:56:54.134: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:56:54.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-z8pdf" for this suite. 
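The garbage-collector test above deletes the Deployment with deleteOptions.propagationPolicy=Orphan, so the ReplicaSet it created is left behind rather than cascaded away. A minimal sketch of the same operation from the command line, assuming a deployment named nginx-deployment; recent kubectl spells the option --cascade=orphan, while clients contemporary with this run (v1.13) expressed it as --cascade=false:

# remove only the Deployment object; its ReplicaSet (and that ReplicaSet's pods) are orphaned
kubectl delete deployment nginx-deployment --cascade=orphan
# the ReplicaSet still exists, now without a live owner
kubectl get replicasets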
May 22 10:57:02.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:57:02.222: INFO: namespace: e2e-tests-gc-z8pdf, resource: bindings, ignored listing per whitelist May 22 10:57:02.230: INFO: namespace e2e-tests-gc-z8pdf deletion completed in 8.09209999s • [SLOW TEST:38.787 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:57:02.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-f79a9597-9c1a-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 10:57:02.565: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f79c3741-9c1a-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-x2kkn" to be "success or failure" May 22 10:57:02.575: INFO: Pod "pod-projected-configmaps-f79c3741-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.540924ms May 22 10:57:04.579: INFO: Pod "pod-projected-configmaps-f79c3741-9c1a-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013614019s May 22 10:57:06.583: INFO: Pod "pod-projected-configmaps-f79c3741-9c1a-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017975389s STEP: Saw pod success May 22 10:57:06.583: INFO: Pod "pod-projected-configmaps-f79c3741-9c1a-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:57:06.585: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-f79c3741-9c1a-11ea-8e9c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 22 10:57:06.712: INFO: Waiting for pod pod-projected-configmaps-f79c3741-9c1a-11ea-8e9c-0242ac110018 to disappear May 22 10:57:06.747: INFO: Pod pod-projected-configmaps-f79c3741-9c1a-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:57:06.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x2kkn" for this suite. 
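Consuming one ConfigMap "in multiple volumes in the same pod", as the projected-configMap test above does, simply means referencing it from more than one projected volume. A minimal sketch with illustrative names (demo-config, projected-demo, busybox); the test's own generated names differ on every run:

kubectl create configmap demo-config --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-a/key /etc/projected-b/key"]
    volumeMounts:
    - name: vol-a
      mountPath: /etc/projected-a
    - name: vol-b
      mountPath: /etc/projected-b
  volumes:
  - name: vol-a
    projected:
      sources:
      - configMap:
          name: demo-config
  - name: vol-b
    projected:
      sources:
      - configMap:
          name: demo-config
EOF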
May 22 10:57:12.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:57:12.816: INFO: namespace: e2e-tests-projected-x2kkn, resource: bindings, ignored listing per whitelist May 22 10:57:12.860: INFO: namespace e2e-tests-projected-x2kkn deletion completed in 6.108877155s • [SLOW TEST:10.631 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:57:12.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 10:57:12.949: INFO: Creating deployment "nginx-deployment" May 22 10:57:12.963: INFO: Waiting for observed generation 1 May 22 10:57:14.971: INFO: Waiting for all required pods to come up May 22 10:57:14.976: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 22 10:57:26.983: INFO: Waiting for deployment "nginx-deployment" to complete May 22 10:57:26.988: INFO: Updating deployment "nginx-deployment" with a non-existent image May 22 10:57:26.996: INFO: Updating deployment nginx-deployment May 22 10:57:26.996: INFO: Waiting for observed generation 2 May 22 10:57:29.224: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 22 10:57:29.435: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 22 10:57:29.456: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 22 10:57:29.462: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 22 10:57:29.462: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 22 10:57:29.464: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 22 10:57:29.466: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 22 10:57:29.466: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 22 10:57:29.471: INFO: Updating deployment nginx-deployment May 22 10:57:29.471: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 22 10:57:30.130: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 22 10:57:30.357: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 22 10:57:32.476: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v7t7h/deployments/nginx-deployment,UID:fdcef48a-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909379,Generation:3,CreationTimestamp:2020-05-22 10:57:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-22 10:57:29 +0000 UTC 2020-05-22 10:57:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-22 10:57:30 +0000 UTC 2020-05-22 10:57:12 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 22 10:57:32.579: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v7t7h/replicasets/nginx-deployment-5c98f8fb5,UID:062e62d3-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909373,Generation:3,CreationTimestamp:2020-05-22 10:57:26 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment fdcef48a-9c1a-11ea-99e8-0242ac110002 0xc000ac8117 0xc000ac8118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 10:57:32.579: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 22 10:57:32.580: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v7t7h/replicasets/nginx-deployment-85ddf47c5d,UID:fdd21fe9-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909353,Generation:3,CreationTimestamp:2020-05-22 10:57:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment fdcef48a-9c1a-11ea-99e8-0242ac110002 0xc000ac81d7 0xc000ac81d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 22 10:57:32.587: INFO: Pod "nginx-deployment-5c98f8fb5-45lq2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-45lq2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-45lq2,UID:062f2490-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909267,Generation:0,CreationTimestamp:2020-05-22 10:57:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100c657 0xc00100c658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100c6d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100c6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-22 10:57:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.587: INFO: Pod "nginx-deployment-5c98f8fb5-4lkt5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4lkt5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-4lkt5,UID:080d1987-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909416,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100c7b7 0xc00100c7b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100c830} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100c850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.587: INFO: Pod "nginx-deployment-5c98f8fb5-5rxb9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5rxb9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-5rxb9,UID:06301d33-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909273,Generation:0,CreationTimestamp:2020-05-22 10:57:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100c917 0xc00100c918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100c990} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100c9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-22 10:57:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.588: INFO: Pod "nginx-deployment-5c98f8fb5-f229c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f229c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-f229c,UID:06301a47-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909286,Generation:0,CreationTimestamp:2020-05-22 10:57:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100ca77 0xc00100ca78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100caf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100cb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-22 10:57:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.588: INFO: Pod "nginx-deployment-5c98f8fb5-grwhh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-grwhh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-grwhh,UID:07e983bc-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909351,Generation:0,CreationTimestamp:2020-05-22 10:57:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100cbd7 0xc00100cbd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100cc50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100cc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.588: INFO: Pod "nginx-deployment-5c98f8fb5-hgk9z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hgk9z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-hgk9z,UID:08179ac9-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909350,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100cd37 0xc00100cd38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100cdb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100cdd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.588: INFO: Pod "nginx-deployment-5c98f8fb5-jtmww" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jtmww,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-jtmww,UID:08179d52-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909422,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100ce47 0xc00100ce48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100cec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100cee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.588: INFO: Pod "nginx-deployment-5c98f8fb5-ktnjb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ktnjb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-ktnjb,UID:081777e2-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909421,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100cfa7 0xc00100cfa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100d030} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100d050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} 
false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.588: INFO: Pod "nginx-deployment-5c98f8fb5-ltwbz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ltwbz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-ltwbz,UID:08179100-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909347,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100d127 0xc00100d128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100d3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100d3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.589: INFO: Pod "nginx-deployment-5c98f8fb5-n2wm4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n2wm4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-n2wm4,UID:080cfe20-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909413,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100d437 0xc00100d438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100d4b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100d4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.589: INFO: Pod "nginx-deployment-5c98f8fb5-ph7d7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ph7d7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-ph7d7,UID:0663ea2d-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909289,Generation:0,CreationTimestamp:2020-05-22 10:57:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100d5b7 0xc00100d5b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100d630} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100d650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-22 10:57:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.589: INFO: Pod "nginx-deployment-5c98f8fb5-tvsmq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tvsmq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-tvsmq,UID:065d7dde-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909288,Generation:0,CreationTimestamp:2020-05-22 10:57:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100d717 0xc00100d718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100d790} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100d7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-22 10:57:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.589: INFO: Pod "nginx-deployment-5c98f8fb5-zdmcp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zdmcp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-5c98f8fb5-zdmcp,UID:08303356-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909357,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 062e62d3-9c1b-11ea-99e8-0242ac110002 0xc00100d877 0xc00100d878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100d950} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100d970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.590: INFO: Pod "nginx-deployment-85ddf47c5d-2bqfv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2bqfv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-2bqfv,UID:fddc1ca8-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909193,Generation:0,CreationTimestamp:2020-05-22 10:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc00100d9e7 0xc00100d9e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100da60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100da80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.171,StartTime:2020-05-22 10:57:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 10:57:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://91f0f7e64a7e8293bf465ae4e55517d051a05db769dee1cbfead7b4e91fa2ed6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.590: INFO: Pod "nginx-deployment-85ddf47c5d-2rvks" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2rvks,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-2rvks,UID:fde1b03d-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909224,Generation:0,CreationTimestamp:2020-05-22 10:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc00100db47 0xc00100db48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100dbc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100dbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.147,StartTime:2020-05-22 10:57:13 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-05-22 10:57:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://691b5e877b329e468d4691520043fc2c14a6146aa3b953b34b1e4c590650a16e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.590: INFO: Pod "nginx-deployment-85ddf47c5d-74vkp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-74vkp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-74vkp,UID:08177649-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909344,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc00100dca7 0xc00100dca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00100dd20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00100dd40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.590: INFO: Pod "nginx-deployment-85ddf47c5d-7qhsc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7qhsc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-7qhsc,UID:fde18d8b-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909221,Generation:0,CreationTimestamp:2020-05-22 10:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc00100dfd7 0xc00100dfd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001784050} {node.kubernetes.io/unreachable Exists NoExecute 0xc001784070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.146,StartTime:2020-05-22 10:57:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 10:57:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ecddf0bb8110f87f4726282528a47c329d83ec84fd2233ff4a3157cda79c2fcb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.590: INFO: Pod "nginx-deployment-85ddf47c5d-89n5f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-89n5f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-89n5f,UID:080d02c1-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909380,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc0017841c7 0xc0017841c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001784240} {node.kubernetes.io/unreachable Exists NoExecute 0xc001784260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.590: INFO: Pod "nginx-deployment-85ddf47c5d-96smh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-96smh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-96smh,UID:fddbef2f-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909204,Generation:0,CreationTimestamp:2020-05-22 10:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc0017843c7 0xc0017843c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001784440} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017844e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.172,StartTime:2020-05-22 10:57:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 10:57:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e949ef546b0ddcc153c1c10d00a233e74990b322ec52b2b68821528a8c73fa7d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.591: INFO: Pod "nginx-deployment-85ddf47c5d-9z6vz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9z6vz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-9z6vz,UID:080d0739-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909386,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc0017845a7 0xc0017845a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017846d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017846f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.591: INFO: Pod "nginx-deployment-85ddf47c5d-bbkzw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bbkzw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-bbkzw,UID:fddaa2b9-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909195,Generation:0,CreationTimestamp:2020-05-22 10:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc0017847a7 0xc0017847a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001784820} {node.kubernetes.io/unreachable Exists NoExecute 0xc001784920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.143,StartTime:2020-05-22 10:57:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 10:57:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4d58592f53ee9cae2643e42742ee4b91682e5df0284fd20314c19149b1747fd7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.591: INFO: Pod "nginx-deployment-85ddf47c5d-dvmlk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dvmlk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-dvmlk,UID:fddd40de-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909203,Generation:0,CreationTimestamp:2020-05-22 10:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc0017849e7 0xc0017849e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001784a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001784af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.145,StartTime:2020-05-22 10:57:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 10:57:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8dea092fdb38d402fac009e22d9f35bde4181792915e0a560cff290a917efc18}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.591: INFO: Pod "nginx-deployment-85ddf47c5d-fggt9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fggt9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-fggt9,UID:fddd3e03-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909228,Generation:0,CreationTimestamp:2020-05-22 10:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc001784bb7 0xc001784bb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001784c30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001784c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.174,StartTime:2020-05-22 10:57:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 10:57:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://eed6db8ba4af8f6bf9bc03abf127062071c5ab73a120733e6c5d6b37ba467a09}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.591: INFO: Pod "nginx-deployment-85ddf47c5d-gd5bx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gd5bx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-gd5bx,UID:08179791-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909345,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc001784d17 0xc001784d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001784da0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001784dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.592: INFO: Pod "nginx-deployment-85ddf47c5d-lczdv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lczdv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-lczdv,UID:080cf7be-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909388,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc001784e37 0xc001784e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001784eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001784ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.592: INFO: Pod "nginx-deployment-85ddf47c5d-lkrj2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lkrj2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-lkrj2,UID:07e99a98-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909375,Generation:0,CreationTimestamp:2020-05-22 10:57:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc001785757 0xc001785758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017857d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001785800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-22 10:57:30 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.592: INFO: Pod "nginx-deployment-85ddf47c5d-mm4g6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mm4g6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-mm4g6,UID:081787c5-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909346,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc001785937 0xc001785938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017859b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017859e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.592: INFO: Pod "nginx-deployment-85ddf47c5d-mzdhq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mzdhq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-mzdhq,UID:080cdf5d-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909368,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc001785a57 
0xc001785a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001785b10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001785b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.592: INFO: Pod "nginx-deployment-85ddf47c5d-q24vv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q24vv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-q24vv,UID:08175ed2-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909343,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc001785bf7 0xc001785bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001785ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001785cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.592: INFO: Pod "nginx-deployment-85ddf47c5d-qhvqx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qhvqx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-qhvqx,UID:081791e6-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909342,Generation:0,CreationTimestamp:2020-05-22 10:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc001785d37 0xc001785d38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001785db0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001785dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.592: INFO: Pod "nginx-deployment-85ddf47c5d-tscp7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tscp7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-tscp7,UID:07dd2393-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909352,Generation:0,CreationTimestamp:2020-05-22 10:57:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc001785ed7 0xc001785ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001785f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001785f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.592: INFO: 
Pod "nginx-deployment-85ddf47c5d-vf5dv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vf5dv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-vf5dv,UID:07e99692-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11909364,Generation:0,CreationTimestamp:2020-05-22 10:57:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc00176c047 0xc00176c048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00176c0c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00176c0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-22 10:57:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 10:57:32.592: INFO: Pod "nginx-deployment-85ddf47c5d-zch5b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zch5b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-v7t7h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v7t7h/pods/nginx-deployment-85ddf47c5d-zch5b,UID:fddd3a16-9c1a-11ea-99e8-0242ac110002,ResourceVersion:11909190,Generation:0,CreationTimestamp:2020-05-22 10:57:13 
+0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fdd21fe9-9c1a-11ea-99e8-0242ac110002 0xc00176c197 0xc00176c198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z6wkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z6wkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z6wkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00176c210} {node.kubernetes.io/unreachable Exists NoExecute 0xc00176c230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 10:57:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.144,StartTime:2020-05-22 10:57:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 10:57:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1ddae1f1ccb3f2dc94d1288aebb1b6cf442445967289d2c5f58c72d84e208af0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:57:32.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-v7t7h" for this suite. 
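For context, the proportional-scaling behaviour exercised by the deployment test above can be reproduced by hand against any cluster. A minimal sketch follows; the deployment name, images and replica counts are illustrative, not the generated values from this run, and the second image tag is deliberately unpullable so that the old and new ReplicaSets coexist while scaling:

kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl scale deployment nginx-deployment --replicas=10
# switch to a tag that cannot be pulled so the rollout stalls with two ReplicaSets,
# then scale; the deployment controller spreads the new total across both sets
kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:does-not-exist
kubectl scale deployment nginx-deployment --replicas=30
kubectl get rs   # both ReplicaSets should hold a share of the 30 replicas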
May 22 10:57:53.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:57:53.195: INFO: namespace: e2e-tests-deployment-v7t7h, resource: bindings, ignored listing per whitelist May 22 10:57:53.248: INFO: namespace e2e-tests-deployment-v7t7h deletion completed in 20.651908091s • [SLOW TEST:40.388 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:57:53.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:57:54.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-q2w6p" for this suite. 
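The pod used by that kubelet test is just a busybox container whose command exits non-zero, so it crash-loops until deleted. A rough equivalent, with an illustrative pod name:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
EOF
kubectl delete pod bin-false
kubectl get pod bin-false   # should eventually report NotFound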
May 22 10:58:00.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:58:01.102: INFO: namespace: e2e-tests-kubelet-test-q2w6p, resource: bindings, ignored listing per whitelist May 22 10:58:01.124: INFO: namespace e2e-tests-kubelet-test-q2w6p deletion completed in 6.501572721s • [SLOW TEST:7.876 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:58:01.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 10:58:01.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 22 10:58:02.645: INFO: stderr: "" May 22 10:58:02.645: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:58:02.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-29n6m" for this suite. 
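The assertion above only checks that both the client and the server version blocks appear in the kubectl version output. The same information can be inspected directly; the --output flag is assumed to be supported by this kubectl build:

kubectl --kubeconfig=/root/.kube/config version
kubectl --kubeconfig=/root/.kube/config version -o json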
May 22 10:58:08.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:58:08.841: INFO: namespace: e2e-tests-kubectl-29n6m, resource: bindings, ignored listing per whitelist May 22 10:58:08.898: INFO: namespace e2e-tests-kubectl-29n6m deletion completed in 6.119877068s • [SLOW TEST:7.773 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:58:08.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-77j2l/configmap-test-1f77c4de-9c1b-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 10:58:09.604: INFO: Waiting up to 5m0s for pod "pod-configmaps-1f92b7b0-9c1b-11ea-8e9c-0242ac110018" in namespace "e2e-tests-configmap-77j2l" to be "success or failure" May 22 10:58:09.651: INFO: Pod "pod-configmaps-1f92b7b0-9c1b-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.12149ms May 22 10:58:11.700: INFO: Pod "pod-configmaps-1f92b7b0-9c1b-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095247982s May 22 10:58:13.704: INFO: Pod "pod-configmaps-1f92b7b0-9c1b-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.099191246s May 22 10:58:15.711: INFO: Pod "pod-configmaps-1f92b7b0-9c1b-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.106975568s STEP: Saw pod success May 22 10:58:15.711: INFO: Pod "pod-configmaps-1f92b7b0-9c1b-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:58:15.714: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-1f92b7b0-9c1b-11ea-8e9c-0242ac110018 container env-test: STEP: delete the pod May 22 10:58:15.744: INFO: Waiting for pod pod-configmaps-1f92b7b0-9c1b-11ea-8e9c-0242ac110018 to disappear May 22 10:58:15.756: INFO: Pod pod-configmaps-1f92b7b0-9c1b-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:58:15.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-77j2l" for this suite. 
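What the ConfigMap test builds under the hood is a ConfigMap plus a pod that maps one of its keys into an environment variable. A minimal sketch, with illustrative names and data key rather than the generated ones from this run:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF
kubectl logs pod-configmaps   # CONFIG_DATA_1=value-1 should appear once the pod has run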
May 22 10:58:23.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:58:23.918: INFO: namespace: e2e-tests-configmap-77j2l, resource: bindings, ignored listing per whitelist May 22 10:58:23.954: INFO: namespace e2e-tests-configmap-77j2l deletion completed in 8.19603156s • [SLOW TEST:15.057 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:58:23.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 10:58:24.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2847d582-9c1b-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-p4vjd" to be "success or failure" May 22 10:58:24.242: INFO: Pod "downwardapi-volume-2847d582-9c1b-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.535801ms May 22 10:58:26.245: INFO: Pod "downwardapi-volume-2847d582-9c1b-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013034858s May 22 10:58:28.248: INFO: Pod "downwardapi-volume-2847d582-9c1b-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.016398951s May 22 10:58:30.253: INFO: Pod "downwardapi-volume-2847d582-9c1b-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020882282s STEP: Saw pod success May 22 10:58:30.253: INFO: Pod "downwardapi-volume-2847d582-9c1b-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 10:58:30.256: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2847d582-9c1b-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 10:58:30.297: INFO: Waiting for pod downwardapi-volume-2847d582-9c1b-11ea-8e9c-0242ac110018 to disappear May 22 10:58:30.314: INFO: Pod downwardapi-volume-2847d582-9c1b-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 10:58:30.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p4vjd" for this suite. 
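The DefaultMode assertion comes down to a projected downwardAPI volume whose defaultMode is set and then read back from inside the container. A minimal sketch; the mode, paths and names are illustrative:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs downwardapi-volume-test   # the listed file mode should reflect defaultMode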
May 22 10:58:36.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 10:58:36.416: INFO: namespace: e2e-tests-projected-p4vjd, resource: bindings, ignored listing per whitelist May 22 10:58:36.450: INFO: namespace e2e-tests-projected-p4vjd deletion completed in 6.132573539s • [SLOW TEST:12.495 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 10:58:36.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-7pxml May 22 10:58:40.600: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-7pxml STEP: checking the pod's current state and verifying that restartCount is present May 22 10:58:40.604: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:02:41.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-7pxml" for this suite. 
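The behaviour being checked is simply that a pod with a healthy httpGet liveness probe keeps its restart count at 0 for the whole observation window. A minimal sketch using nginx as a stand-in for the e2e liveness image (which serves /healthz); the image, path and timings are illustrative:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 5
EOF
# after a few minutes the restart count should still be 0
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'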
May 22 11:02:47.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:02:47.732: INFO: namespace: e2e-tests-container-probe-7pxml, resource: bindings, ignored listing per whitelist May 22 11:02:47.739: INFO: namespace e2e-tests-container-probe-7pxml deletion completed in 6.08378158s • [SLOW TEST:251.289 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:02:47.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 22 11:02:47.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:48.162: INFO: stderr: "" May 22 11:02:48.162: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 11:02:48.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:48.280: INFO: stderr: "" May 22 11:02:48.280: INFO: stdout: "update-demo-nautilus-7wj4h update-demo-nautilus-shpmz " May 22 11:02:48.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wj4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:48.380: INFO: stderr: "" May 22 11:02:48.380: INFO: stdout: "" May 22 11:02:48.380: INFO: update-demo-nautilus-7wj4h is created but not running May 22 11:02:53.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:53.483: INFO: stderr: "" May 22 11:02:53.483: INFO: stdout: "update-demo-nautilus-7wj4h update-demo-nautilus-shpmz " May 22 11:02:53.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wj4h -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:53.591: INFO: stderr: "" May 22 11:02:53.591: INFO: stdout: "true" May 22 11:02:53.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wj4h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:53.682: INFO: stderr: "" May 22 11:02:53.682: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 11:02:53.682: INFO: validating pod update-demo-nautilus-7wj4h May 22 11:02:53.713: INFO: got data: { "image": "nautilus.jpg" } May 22 11:02:53.714: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 11:02:53.714: INFO: update-demo-nautilus-7wj4h is verified up and running May 22 11:02:53.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shpmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:53.817: INFO: stderr: "" May 22 11:02:53.817: INFO: stdout: "true" May 22 11:02:53.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shpmz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:53.920: INFO: stderr: "" May 22 11:02:53.920: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 11:02:53.920: INFO: validating pod update-demo-nautilus-shpmz May 22 11:02:53.930: INFO: got data: { "image": "nautilus.jpg" } May 22 11:02:53.930: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 11:02:53.930: INFO: update-demo-nautilus-shpmz is verified up and running STEP: using delete to clean up resources May 22 11:02:53.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:54.041: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 22 11:02:54.041: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 22 11:02:54.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-jll9n' May 22 11:02:54.157: INFO: stderr: "No resources found.\n" May 22 11:02:54.157: INFO: stdout: "" May 22 11:02:54.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-jll9n -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 22 11:02:54.269: INFO: stderr: "" May 22 11:02:54.269: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:02:54.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jll9n" for this suite. May 22 11:03:16.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:03:16.345: INFO: namespace: e2e-tests-kubectl-jll9n, resource: bindings, ignored listing per whitelist May 22 11:03:16.365: INFO: namespace e2e-tests-kubectl-jll9n deletion completed in 22.091992026s • [SLOW TEST:28.625 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:03:16.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0522 11:03:27.991257 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 22 11:03:27.991: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:03:27.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-m58mz" for this suite. May 22 11:03:38.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:03:38.024: INFO: namespace: e2e-tests-gc-m58mz, resource: bindings, ignored listing per whitelist May 22 11:03:38.155: INFO: namespace e2e-tests-gc-m58mz deletion completed in 10.16020116s • [SLOW TEST:21.790 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:03:38.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:03:38.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3787806-9c1b-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-mbp7p" to be "success or failure" May 22 11:03:38.277: INFO: Pod "downwardapi-volume-e3787806-9c1b-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.898996ms May 22 11:03:40.281: INFO: Pod "downwardapi-volume-e3787806-9c1b-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006982211s May 22 11:03:42.285: INFO: Pod "downwardapi-volume-e3787806-9c1b-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010799293s STEP: Saw pod success May 22 11:03:42.285: INFO: Pod "downwardapi-volume-e3787806-9c1b-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:03:42.288: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e3787806-9c1b-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:03:42.359: INFO: Waiting for pod downwardapi-volume-e3787806-9c1b-11ea-8e9c-0242ac110018 to disappear May 22 11:03:42.402: INFO: Pod downwardapi-volume-e3787806-9c1b-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:03:42.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mbp7p" for this suite. May 22 11:03:48.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:03:48.584: INFO: namespace: e2e-tests-projected-mbp7p, resource: bindings, ignored listing per whitelist May 22 11:03:48.630: INFO: namespace e2e-tests-projected-mbp7p deletion completed in 6.091663474s • [SLOW TEST:10.475 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:03:48.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 11:03:48.712: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 22 11:03:48.734: INFO: Pod name sample-pod: Found 0 pods out of 1 May 22 11:03:53.738: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 22 11:03:53.738: INFO: Creating deployment "test-rolling-update-deployment" May 22 11:03:53.742: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 22 11:03:53.749: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 22 11:03:55.864: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 22 
11:03:55.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725742233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725742233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725742233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725742233, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 11:03:57.877: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 22 11:03:57.886: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-kxzzm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kxzzm/deployments/test-rolling-update-deployment,UID:ecb29eee-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11910791,Generation:1,CreationTimestamp:2020-05-22 11:03:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-22 11:03:53 +0000 UTC 2020-05-22 11:03:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-22 11:03:57 +0000 UTC 2020-05-22 11:03:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 22 11:03:57.890: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-kxzzm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kxzzm/replicasets/test-rolling-update-deployment-75db98fb4c,UID:ecb4c899-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11910781,Generation:1,CreationTimestamp:2020-05-22 11:03:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ecb29eee-9c1b-11ea-99e8-0242ac110002 0xc00235aee7 0xc00235aee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 22 11:03:57.890: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 22 11:03:57.890: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-kxzzm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kxzzm/replicasets/test-rolling-update-controller,UID:e9b3a6f5-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11910790,Generation:2,CreationTimestamp:2020-05-22 11:03:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ecb29eee-9c1b-11ea-99e8-0242ac110002 0xc00235ae27 0xc00235ae28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 11:03:57.893: INFO: Pod "test-rolling-update-deployment-75db98fb4c-5t4hz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-5t4hz,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-kxzzm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kxzzm/pods/test-rolling-update-deployment-75db98fb4c-5t4hz,UID:ecbb1cb6-9c1b-11ea-99e8-0242ac110002,ResourceVersion:11910780,Generation:0,CreationTimestamp:2020-05-22 11:03:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c ecb4c899-9c1b-11ea-99e8-0242ac110002 0xc00235b957 0xc00235b958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6j9r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6j9r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-x6j9r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00235b9d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00235b9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 11:03:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 11:03:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 11:03:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 11:03:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.168,StartTime:2020-05-22 11:03:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-22 11:03:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://81347c57415a8732d99d6433320bbabd18c291ddd5e0cf5c2738fb1b02a8f89e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:03:57.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-kxzzm" for this suite. 
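The rolling update above boils down to a Deployment whose pod template changes while an adopted ReplicaSet still owns the old pod, so the old set is scaled down as the new one comes up. A compact way to watch the same delete-old/create-new behaviour; the names and images mirror the dump above but the flow itself is illustrative:

kubectl create deployment test-rolling-update --image=docker.io/library/nginx:1.14-alpine
kubectl set image deployment/test-rolling-update nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/test-rolling-update
kubectl get rs   # the old ReplicaSet should be scaled to 0, the new one to 1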
May 22 11:04:05.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:04:05.942: INFO: namespace: e2e-tests-deployment-kxzzm, resource: bindings, ignored listing per whitelist May 22 11:04:05.988: INFO: namespace e2e-tests-deployment-kxzzm deletion completed in 8.09175388s • [SLOW TEST:17.358 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:04:05.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:04:10.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-kdb2x" for this suite. 
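The Kubelet "should print the output to logs" case schedules a busybox pod whose command writes to stdout and then verifies the text can be read back through the pod log endpoint. A minimal sketch of such a pod, assuming a plain busybox image and an illustrative message (not the exact command the framework uses):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxEchoPod returns a pod whose only container echoes one line to stdout,
// so the kubelet captures it and it can be retrieved with `kubectl logs`.
func busyboxEchoPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-echo", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo 'hello from the kubelet test'"},
			}},
		},
	}
}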
May 22 11:04:52.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:04:52.222: INFO: namespace: e2e-tests-kubelet-test-kdb2x, resource: bindings, ignored listing per whitelist May 22 11:04:52.289: INFO: namespace e2e-tests-kubelet-test-kdb2x deletion completed in 42.142593326s • [SLOW TEST:46.301 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:04:52.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 22 11:04:52.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 22 11:04:54.922: INFO: stderr: "" May 22 11:04:54.922: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:04:54.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-f6c9l" for this suite. 
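The cluster-info case shells out to kubectl exactly as logged above and asserts that the "Kubernetes master" entry appears in the (ANSI-colored) stdout. A small sketch of that pattern with os/exec, reusing the kubectl path and kubeconfig shown in the log; error handling is deliberately minimal:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run `kubectl cluster-info` the same way the test run above does.
	out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// The conformance check is essentially: the master service must be listed.
	if strings.Contains(string(out), "Kubernetes master") {
		fmt.Println("master service is included in cluster-info")
	}
}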
May 22 11:05:00.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:05:01.032: INFO: namespace: e2e-tests-kubectl-f6c9l, resource: bindings, ignored listing per whitelist May 22 11:05:01.059: INFO: namespace e2e-tests-kubectl-f6c9l deletion completed in 6.134792809s • [SLOW TEST:8.770 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:05:01.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 22 11:05:05.677: INFO: Successfully updated pod "labelsupdate14de13c8-9c1c-11ea-8e9c-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:05:09.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-696x4" for this suite. 
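The projected downwardAPI case mounts the pod's own labels as a file and expects the file contents to change after the labels are patched (the "Successfully updated pod" line above). A sketch of the volume wiring, with illustrative names, image and paths:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsDownwardAPIPod mounts metadata.labels at /etc/podinfo/labels via a
// projected volume; updating the pod's labels later updates the file contents.
func labelsDownwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-example",
			Labels: map[string]string{"app": "demo"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}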
May 22 11:05:31.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:05:31.755: INFO: namespace: e2e-tests-projected-696x4, resource: bindings, ignored listing per whitelist May 22 11:05:31.815: INFO: namespace e2e-tests-projected-696x4 deletion completed in 22.091578602s • [SLOW TEST:30.756 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:05:31.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-273b203d-9c1c-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:05:31.955: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-273bad3b-9c1c-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-dhhpr" to be "success or failure" May 22 11:05:31.959: INFO: Pod "pod-projected-configmaps-273bad3b-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.873447ms May 22 11:05:33.963: INFO: Pod "pod-projected-configmaps-273bad3b-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007832592s May 22 11:05:35.967: INFO: Pod "pod-projected-configmaps-273bad3b-9c1c-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011824867s STEP: Saw pod success May 22 11:05:35.967: INFO: Pod "pod-projected-configmaps-273bad3b-9c1c-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:05:35.970: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-273bad3b-9c1c-11ea-8e9c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 22 11:05:35.998: INFO: Waiting for pod pod-projected-configmaps-273bad3b-9c1c-11ea-8e9c-0242ac110018 to disappear May 22 11:05:36.007: INFO: Pod pod-projected-configmaps-273bad3b-9c1c-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:05:36.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dhhpr" for this suite. 
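The "mappings and Item mode set" case consumes a ConfigMap through a projected volume with an explicit key-to-path mapping and a per-item file mode. A sketch of just the volume source (ConfigMap key, path and mode are illustrative, not read from this run):

package example

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume maps key "data-1" of an existing ConfigMap to the
// file "path/to/data-2" with mode 0400, mirroring the scenario above.
func projectedConfigMapVolume(configMapName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}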
May 22 11:05:42.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:05:42.076: INFO: namespace: e2e-tests-projected-dhhpr, resource: bindings, ignored listing per whitelist May 22 11:05:42.090: INFO: namespace e2e-tests-projected-dhhpr deletion completed in 6.080086379s • [SLOW TEST:10.275 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:05:42.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-lfvt9 I0522 11:05:42.177864 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-lfvt9, replica count: 1 I0522 11:05:43.228328 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 11:05:44.228547 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 11:05:45.228812 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 11:05:46.229022 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 22 11:05:46.383: INFO: Created: latency-svc-j4x6f May 22 11:05:46.392: INFO: Got endpoints: latency-svc-j4x6f [63.073939ms] May 22 11:05:46.465: INFO: Created: latency-svc-tv6b8 May 22 11:05:46.520: INFO: Got endpoints: latency-svc-tv6b8 [127.662651ms] May 22 11:05:46.550: INFO: Created: latency-svc-jj559 May 22 11:05:46.562: INFO: Got endpoints: latency-svc-jj559 [169.081346ms] May 22 11:05:46.591: INFO: Created: latency-svc-srkj7 May 22 11:05:46.605: INFO: Got endpoints: latency-svc-srkj7 [212.078851ms] May 22 11:05:46.664: INFO: Created: latency-svc-997gc May 22 11:05:46.676: INFO: Got endpoints: latency-svc-997gc [282.70083ms] May 22 11:05:46.714: INFO: Created: latency-svc-cr6tn May 22 11:05:46.724: INFO: Got endpoints: latency-svc-cr6tn [331.384341ms] May 22 11:05:46.748: INFO: Created: latency-svc-lg485 May 22 11:05:46.761: INFO: Got endpoints: latency-svc-lg485 [368.107782ms] May 22 11:05:46.821: INFO: Created: latency-svc-dkmbc May 22 11:05:46.824: INFO: Got endpoints: latency-svc-dkmbc [431.035701ms] May 22 11:05:46.849: INFO: Created: latency-svc-4kg8t May 22 11:05:46.863: INFO: Got endpoints: 
latency-svc-4kg8t [470.303448ms] May 22 11:05:46.887: INFO: Created: latency-svc-rllhl May 22 11:05:46.951: INFO: Got endpoints: latency-svc-rllhl [557.998705ms] May 22 11:05:46.970: INFO: Created: latency-svc-drvxz May 22 11:05:46.984: INFO: Got endpoints: latency-svc-drvxz [591.079646ms] May 22 11:05:47.011: INFO: Created: latency-svc-lhvbq May 22 11:05:47.026: INFO: Got endpoints: latency-svc-lhvbq [633.045114ms] May 22 11:05:47.090: INFO: Created: latency-svc-fdkmr May 22 11:05:47.093: INFO: Got endpoints: latency-svc-fdkmr [699.374969ms] May 22 11:05:47.120: INFO: Created: latency-svc-2pbvn May 22 11:05:47.135: INFO: Got endpoints: latency-svc-2pbvn [741.59939ms] May 22 11:05:47.156: INFO: Created: latency-svc-f7n2p May 22 11:05:47.165: INFO: Got endpoints: latency-svc-f7n2p [771.735706ms] May 22 11:05:47.233: INFO: Created: latency-svc-thx5f May 22 11:05:47.238: INFO: Got endpoints: latency-svc-thx5f [844.51037ms] May 22 11:05:47.330: INFO: Created: latency-svc-nnmg4 May 22 11:05:47.379: INFO: Got endpoints: latency-svc-nnmg4 [858.529642ms] May 22 11:05:47.407: INFO: Created: latency-svc-mndtj May 22 11:05:47.424: INFO: Got endpoints: latency-svc-mndtj [861.370643ms] May 22 11:05:47.449: INFO: Created: latency-svc-sbq5w May 22 11:05:47.502: INFO: Got endpoints: latency-svc-sbq5w [896.993082ms] May 22 11:05:47.522: INFO: Created: latency-svc-wxj5l May 22 11:05:47.551: INFO: Got endpoints: latency-svc-wxj5l [875.586855ms] May 22 11:05:47.583: INFO: Created: latency-svc-cl4qb May 22 11:05:47.592: INFO: Got endpoints: latency-svc-cl4qb [867.664933ms] May 22 11:05:47.642: INFO: Created: latency-svc-hwk7l May 22 11:05:47.647: INFO: Got endpoints: latency-svc-hwk7l [885.307266ms] May 22 11:05:47.677: INFO: Created: latency-svc-hhqp4 May 22 11:05:47.689: INFO: Got endpoints: latency-svc-hhqp4 [864.89932ms] May 22 11:05:47.720: INFO: Created: latency-svc-prc5w May 22 11:05:47.790: INFO: Got endpoints: latency-svc-prc5w [925.991735ms] May 22 11:05:47.804: INFO: Created: latency-svc-hdzls May 22 11:05:47.816: INFO: Got endpoints: latency-svc-hdzls [864.580909ms] May 22 11:05:47.839: INFO: Created: latency-svc-w5lql May 22 11:05:47.851: INFO: Got endpoints: latency-svc-w5lql [867.286041ms] May 22 11:05:47.874: INFO: Created: latency-svc-gmcxt May 22 11:05:47.918: INFO: Got endpoints: latency-svc-gmcxt [891.298627ms] May 22 11:05:47.961: INFO: Created: latency-svc-9r7xj May 22 11:05:47.984: INFO: Got endpoints: latency-svc-9r7xj [891.766623ms] May 22 11:05:48.006: INFO: Created: latency-svc-62r85 May 22 11:05:48.053: INFO: Got endpoints: latency-svc-62r85 [918.739788ms] May 22 11:05:48.072: INFO: Created: latency-svc-7gsq7 May 22 11:05:48.088: INFO: Got endpoints: latency-svc-7gsq7 [923.431607ms] May 22 11:05:48.110: INFO: Created: latency-svc-phphk May 22 11:05:48.124: INFO: Got endpoints: latency-svc-phphk [885.765963ms] May 22 11:05:48.148: INFO: Created: latency-svc-rkctx May 22 11:05:48.215: INFO: Got endpoints: latency-svc-rkctx [836.419152ms] May 22 11:05:48.218: INFO: Created: latency-svc-sbcpv May 22 11:05:48.222: INFO: Got endpoints: latency-svc-sbcpv [798.023688ms] May 22 11:05:48.265: INFO: Created: latency-svc-kmhkk May 22 11:05:48.280: INFO: Got endpoints: latency-svc-kmhkk [777.495395ms] May 22 11:05:48.373: INFO: Created: latency-svc-cwg4r May 22 11:05:48.394: INFO: Got endpoints: latency-svc-cwg4r [842.954977ms] May 22 11:05:48.423: INFO: Created: latency-svc-bwpx5 May 22 11:05:48.430: INFO: Got endpoints: latency-svc-bwpx5 [837.848313ms] May 22 11:05:48.498: INFO: Created: 
latency-svc-c86rd May 22 11:05:48.517: INFO: Got endpoints: latency-svc-c86rd [870.623057ms] May 22 11:05:48.560: INFO: Created: latency-svc-gdlbg May 22 11:05:48.587: INFO: Got endpoints: latency-svc-gdlbg [898.219233ms] May 22 11:05:48.634: INFO: Created: latency-svc-mrnjc May 22 11:05:48.637: INFO: Got endpoints: latency-svc-mrnjc [847.711765ms] May 22 11:05:48.691: INFO: Created: latency-svc-j5s7b May 22 11:05:48.722: INFO: Got endpoints: latency-svc-j5s7b [906.215274ms] May 22 11:05:48.778: INFO: Created: latency-svc-cmntt May 22 11:05:48.781: INFO: Got endpoints: latency-svc-cmntt [929.557453ms] May 22 11:05:48.811: INFO: Created: latency-svc-cwlmd May 22 11:05:48.828: INFO: Got endpoints: latency-svc-cwlmd [909.958771ms] May 22 11:05:48.847: INFO: Created: latency-svc-tjctd May 22 11:05:48.864: INFO: Got endpoints: latency-svc-tjctd [879.288398ms] May 22 11:05:48.922: INFO: Created: latency-svc-lzb48 May 22 11:05:48.949: INFO: Got endpoints: latency-svc-lzb48 [895.397727ms] May 22 11:05:48.968: INFO: Created: latency-svc-hfbc4 May 22 11:05:48.978: INFO: Got endpoints: latency-svc-hfbc4 [889.844727ms] May 22 11:05:48.998: INFO: Created: latency-svc-828hn May 22 11:05:49.009: INFO: Got endpoints: latency-svc-828hn [885.249385ms] May 22 11:05:49.066: INFO: Created: latency-svc-db6j7 May 22 11:05:49.069: INFO: Got endpoints: latency-svc-db6j7 [853.674003ms] May 22 11:05:49.093: INFO: Created: latency-svc-nfz57 May 22 11:05:49.105: INFO: Got endpoints: latency-svc-nfz57 [883.400918ms] May 22 11:05:49.124: INFO: Created: latency-svc-pm4cq May 22 11:05:49.142: INFO: Got endpoints: latency-svc-pm4cq [862.582296ms] May 22 11:05:49.160: INFO: Created: latency-svc-mgm4d May 22 11:05:49.203: INFO: Got endpoints: latency-svc-mgm4d [808.498568ms] May 22 11:05:49.227: INFO: Created: latency-svc-hcq4s May 22 11:05:49.232: INFO: Got endpoints: latency-svc-hcq4s [801.919232ms] May 22 11:05:49.261: INFO: Created: latency-svc-6c59j May 22 11:05:49.353: INFO: Got endpoints: latency-svc-6c59j [835.868521ms] May 22 11:05:49.357: INFO: Created: latency-svc-4km5w May 22 11:05:49.365: INFO: Got endpoints: latency-svc-4km5w [777.741194ms] May 22 11:05:49.388: INFO: Created: latency-svc-pf5s2 May 22 11:05:49.401: INFO: Got endpoints: latency-svc-pf5s2 [763.794815ms] May 22 11:05:49.944: INFO: Created: latency-svc-hfd7s May 22 11:05:49.952: INFO: Got endpoints: latency-svc-hfd7s [1.230314384s] May 22 11:05:50.491: INFO: Created: latency-svc-msvpq May 22 11:05:50.504: INFO: Got endpoints: latency-svc-msvpq [1.72315657s] May 22 11:05:50.531: INFO: Created: latency-svc-5qtfl May 22 11:05:50.546: INFO: Got endpoints: latency-svc-5qtfl [1.718325563s] May 22 11:05:51.114: INFO: Created: latency-svc-ljh5z May 22 11:05:51.127: INFO: Got endpoints: latency-svc-ljh5z [2.263405942s] May 22 11:05:51.712: INFO: Created: latency-svc-9khxf May 22 11:05:51.715: INFO: Got endpoints: latency-svc-9khxf [2.76553613s] May 22 11:05:52.319: INFO: Created: latency-svc-tvsqw May 22 11:05:52.338: INFO: Got endpoints: latency-svc-tvsqw [3.360046068s] May 22 11:05:52.864: INFO: Created: latency-svc-f6z5k May 22 11:05:52.909: INFO: Got endpoints: latency-svc-f6z5k [3.900563692s] May 22 11:05:53.491: INFO: Created: latency-svc-trtld May 22 11:05:53.495: INFO: Got endpoints: latency-svc-trtld [4.425766416s] May 22 11:05:53.518: INFO: Created: latency-svc-wqr6c May 22 11:05:53.529: INFO: Got endpoints: latency-svc-wqr6c [4.423987531s] May 22 11:05:54.210: INFO: Created: latency-svc-hvgw8 May 22 11:05:54.715: INFO: Got endpoints: 
latency-svc-hvgw8 [5.57324646s] May 22 11:05:54.971: INFO: Created: latency-svc-dbnsw May 22 11:05:54.999: INFO: Got endpoints: latency-svc-dbnsw [5.796033751s] May 22 11:05:56.036: INFO: Created: latency-svc-29l7b May 22 11:05:56.376: INFO: Got endpoints: latency-svc-29l7b [1.660767907s] May 22 11:05:56.606: INFO: Created: latency-svc-dtncm May 22 11:05:56.637: INFO: Got endpoints: latency-svc-dtncm [7.405339168s] May 22 11:05:57.192: INFO: Created: latency-svc-mdpkr May 22 11:05:57.202: INFO: Got endpoints: latency-svc-mdpkr [7.848414045s] May 22 11:05:57.789: INFO: Created: latency-svc-44bck May 22 11:05:57.849: INFO: Got endpoints: latency-svc-44bck [8.484332601s] May 22 11:05:58.355: INFO: Created: latency-svc-sjjpn May 22 11:05:58.358: INFO: Got endpoints: latency-svc-sjjpn [8.956835575s] May 22 11:05:58.438: INFO: Created: latency-svc-tjsrx May 22 11:05:58.447: INFO: Got endpoints: latency-svc-tjsrx [8.495083958s] May 22 11:05:58.489: INFO: Created: latency-svc-zh7kg May 22 11:05:58.503: INFO: Got endpoints: latency-svc-zh7kg [7.998394285s] May 22 11:05:59.077: INFO: Created: latency-svc-rjcth May 22 11:05:59.119: INFO: Got endpoints: latency-svc-rjcth [8.572964058s] May 22 11:05:59.635: INFO: Created: latency-svc-lgb2c May 22 11:05:59.653: INFO: Got endpoints: latency-svc-lgb2c [8.525883047s] May 22 11:06:00.209: INFO: Created: latency-svc-958tz May 22 11:06:00.222: INFO: Got endpoints: latency-svc-958tz [8.507522688s] May 22 11:06:00.246: INFO: Created: latency-svc-v2pgd May 22 11:06:00.259: INFO: Got endpoints: latency-svc-v2pgd [7.920165743s] May 22 11:06:00.282: INFO: Created: latency-svc-94hwh May 22 11:06:00.295: INFO: Got endpoints: latency-svc-94hwh [7.385508336s] May 22 11:06:00.347: INFO: Created: latency-svc-n5ffq May 22 11:06:00.355: INFO: Got endpoints: latency-svc-n5ffq [6.860375969s] May 22 11:06:00.383: INFO: Created: latency-svc-z5xkl May 22 11:06:00.400: INFO: Got endpoints: latency-svc-z5xkl [6.871032004s] May 22 11:06:00.432: INFO: Created: latency-svc-w8vq6 May 22 11:06:00.446: INFO: Got endpoints: latency-svc-w8vq6 [5.446854979s] May 22 11:06:00.497: INFO: Created: latency-svc-jswmz May 22 11:06:00.507: INFO: Got endpoints: latency-svc-jswmz [4.130442895s] May 22 11:06:00.527: INFO: Created: latency-svc-96dql May 22 11:06:00.550: INFO: Got endpoints: latency-svc-96dql [3.912846245s] May 22 11:06:00.587: INFO: Created: latency-svc-xsxg7 May 22 11:06:00.646: INFO: Got endpoints: latency-svc-xsxg7 [3.444394453s] May 22 11:06:00.648: INFO: Created: latency-svc-lnxdh May 22 11:06:00.657: INFO: Got endpoints: latency-svc-lnxdh [2.807052566s] May 22 11:06:00.678: INFO: Created: latency-svc-b29xd May 22 11:06:00.693: INFO: Got endpoints: latency-svc-b29xd [2.335151664s] May 22 11:06:00.713: INFO: Created: latency-svc-4kkjh May 22 11:06:00.723: INFO: Got endpoints: latency-svc-4kkjh [2.275633193s] May 22 11:06:00.743: INFO: Created: latency-svc-lb2hk May 22 11:06:00.814: INFO: Got endpoints: latency-svc-lb2hk [2.31094646s] May 22 11:06:00.840: INFO: Created: latency-svc-6bpf7 May 22 11:06:00.856: INFO: Got endpoints: latency-svc-6bpf7 [1.736721924s] May 22 11:06:00.882: INFO: Created: latency-svc-ld8hp May 22 11:06:00.898: INFO: Got endpoints: latency-svc-ld8hp [1.245239262s] May 22 11:06:00.963: INFO: Created: latency-svc-xdz2z May 22 11:06:00.970: INFO: Got endpoints: latency-svc-xdz2z [747.895691ms] May 22 11:06:01.008: INFO: Created: latency-svc-bq6z6 May 22 11:06:01.025: INFO: Got endpoints: latency-svc-bq6z6 [766.203119ms] May 22 11:06:01.120: INFO: Created: 
latency-svc-bf7l7 May 22 11:06:01.150: INFO: Got endpoints: latency-svc-bf7l7 [855.243178ms] May 22 11:06:01.188: INFO: Created: latency-svc-vshdg May 22 11:06:01.205: INFO: Got endpoints: latency-svc-vshdg [850.066989ms] May 22 11:06:01.287: INFO: Created: latency-svc-vtph9 May 22 11:06:01.291: INFO: Got endpoints: latency-svc-vtph9 [890.499304ms] May 22 11:06:01.325: INFO: Created: latency-svc-bpxcd May 22 11:06:01.351: INFO: Got endpoints: latency-svc-bpxcd [904.625346ms] May 22 11:06:01.374: INFO: Created: latency-svc-sgmpd May 22 11:06:01.436: INFO: Got endpoints: latency-svc-sgmpd [929.537349ms] May 22 11:06:01.440: INFO: Created: latency-svc-lhcdk May 22 11:06:01.447: INFO: Got endpoints: latency-svc-lhcdk [896.411641ms] May 22 11:06:01.468: INFO: Created: latency-svc-2ldg9 May 22 11:06:01.483: INFO: Got endpoints: latency-svc-2ldg9 [837.105568ms] May 22 11:06:01.512: INFO: Created: latency-svc-j6nvq May 22 11:06:01.526: INFO: Got endpoints: latency-svc-j6nvq [869.226883ms] May 22 11:06:01.584: INFO: Created: latency-svc-t2mrh May 22 11:06:01.602: INFO: Got endpoints: latency-svc-t2mrh [909.124443ms] May 22 11:06:01.646: INFO: Created: latency-svc-mhc6g May 22 11:06:01.665: INFO: Got endpoints: latency-svc-mhc6g [941.317908ms] May 22 11:06:01.730: INFO: Created: latency-svc-rqxcp May 22 11:06:01.737: INFO: Got endpoints: latency-svc-rqxcp [923.279675ms] May 22 11:06:01.768: INFO: Created: latency-svc-tm5rh May 22 11:06:01.778: INFO: Got endpoints: latency-svc-tm5rh [922.061728ms] May 22 11:06:01.816: INFO: Created: latency-svc-nhdks May 22 11:06:01.861: INFO: Got endpoints: latency-svc-nhdks [962.965652ms] May 22 11:06:01.877: INFO: Created: latency-svc-v7vjt May 22 11:06:01.893: INFO: Got endpoints: latency-svc-v7vjt [923.355877ms] May 22 11:06:01.914: INFO: Created: latency-svc-xh4cc May 22 11:06:01.931: INFO: Got endpoints: latency-svc-xh4cc [905.622128ms] May 22 11:06:01.956: INFO: Created: latency-svc-4k5lp May 22 11:06:01.994: INFO: Got endpoints: latency-svc-4k5lp [843.239309ms] May 22 11:06:02.004: INFO: Created: latency-svc-nshzb May 22 11:06:02.014: INFO: Got endpoints: latency-svc-nshzb [808.70098ms] May 22 11:06:02.039: INFO: Created: latency-svc-wchbn May 22 11:06:02.057: INFO: Got endpoints: latency-svc-wchbn [766.119593ms] May 22 11:06:02.074: INFO: Created: latency-svc-4pmf2 May 22 11:06:02.093: INFO: Got endpoints: latency-svc-4pmf2 [742.327329ms] May 22 11:06:02.144: INFO: Created: latency-svc-shzwp May 22 11:06:02.147: INFO: Got endpoints: latency-svc-shzwp [710.34112ms] May 22 11:06:02.172: INFO: Created: latency-svc-9f8xj May 22 11:06:02.183: INFO: Got endpoints: latency-svc-9f8xj [736.574016ms] May 22 11:06:02.207: INFO: Created: latency-svc-qfhj2 May 22 11:06:02.231: INFO: Got endpoints: latency-svc-qfhj2 [747.06433ms] May 22 11:06:02.287: INFO: Created: latency-svc-g85q9 May 22 11:06:02.290: INFO: Got endpoints: latency-svc-g85q9 [763.922111ms] May 22 11:06:02.311: INFO: Created: latency-svc-2t85c May 22 11:06:02.328: INFO: Got endpoints: latency-svc-2t85c [725.469605ms] May 22 11:06:02.353: INFO: Created: latency-svc-rkw7c May 22 11:06:02.380: INFO: Got endpoints: latency-svc-rkw7c [715.66672ms] May 22 11:06:02.437: INFO: Created: latency-svc-65mck May 22 11:06:02.439: INFO: Got endpoints: latency-svc-65mck [701.92088ms] May 22 11:06:02.466: INFO: Created: latency-svc-qcj8x May 22 11:06:02.479: INFO: Got endpoints: latency-svc-qcj8x [701.460461ms] May 22 11:06:02.503: INFO: Created: latency-svc-fzmt5 May 22 11:06:02.515: INFO: Got endpoints: 
latency-svc-fzmt5 [653.948816ms] May 22 11:06:02.586: INFO: Created: latency-svc-gpvn8 May 22 11:06:02.590: INFO: Got endpoints: latency-svc-gpvn8 [696.947819ms] May 22 11:06:02.627: INFO: Created: latency-svc-zgt8f May 22 11:06:02.642: INFO: Got endpoints: latency-svc-zgt8f [711.41087ms] May 22 11:06:02.669: INFO: Created: latency-svc-gs5jl May 22 11:06:02.684: INFO: Got endpoints: latency-svc-gs5jl [690.377804ms] May 22 11:06:02.730: INFO: Created: latency-svc-rfv8t May 22 11:06:02.733: INFO: Got endpoints: latency-svc-rfv8t [719.432226ms] May 22 11:06:02.767: INFO: Created: latency-svc-m5n7v May 22 11:06:02.777: INFO: Got endpoints: latency-svc-m5n7v [720.04984ms] May 22 11:06:02.801: INFO: Created: latency-svc-xn8jn May 22 11:06:02.817: INFO: Got endpoints: latency-svc-xn8jn [724.219961ms] May 22 11:06:02.874: INFO: Created: latency-svc-mlwtv May 22 11:06:02.877: INFO: Got endpoints: latency-svc-mlwtv [730.470672ms] May 22 11:06:02.922: INFO: Created: latency-svc-lwdcd May 22 11:06:02.938: INFO: Got endpoints: latency-svc-lwdcd [753.992668ms] May 22 11:06:02.965: INFO: Created: latency-svc-62lxw May 22 11:06:03.030: INFO: Got endpoints: latency-svc-62lxw [799.19979ms] May 22 11:06:03.031: INFO: Created: latency-svc-wn25w May 22 11:06:03.078: INFO: Got endpoints: latency-svc-wn25w [788.584162ms] May 22 11:06:03.186: INFO: Created: latency-svc-lprd5 May 22 11:06:03.190: INFO: Got endpoints: latency-svc-lprd5 [861.601843ms] May 22 11:06:03.252: INFO: Created: latency-svc-b2wrv May 22 11:06:03.268: INFO: Got endpoints: latency-svc-b2wrv [887.909504ms] May 22 11:06:03.371: INFO: Created: latency-svc-4hwcg May 22 11:06:03.403: INFO: Got endpoints: latency-svc-4hwcg [963.883367ms] May 22 11:06:03.440: INFO: Created: latency-svc-7jz52 May 22 11:06:03.455: INFO: Got endpoints: latency-svc-7jz52 [975.328018ms] May 22 11:06:03.504: INFO: Created: latency-svc-dzvzr May 22 11:06:03.533: INFO: Got endpoints: latency-svc-dzvzr [1.017794129s] May 22 11:06:03.594: INFO: Created: latency-svc-lhrg5 May 22 11:06:03.658: INFO: Got endpoints: latency-svc-lhrg5 [1.067304948s] May 22 11:06:03.660: INFO: Created: latency-svc-rg8x5 May 22 11:06:03.671: INFO: Got endpoints: latency-svc-rg8x5 [1.029400014s] May 22 11:06:03.714: INFO: Created: latency-svc-z4wfc May 22 11:06:03.756: INFO: Got endpoints: latency-svc-z4wfc [1.071637585s] May 22 11:06:03.826: INFO: Created: latency-svc-hb2h9 May 22 11:06:03.828: INFO: Got endpoints: latency-svc-hb2h9 [1.094507323s] May 22 11:06:03.857: INFO: Created: latency-svc-cb8t2 May 22 11:06:03.870: INFO: Got endpoints: latency-svc-cb8t2 [1.093224831s] May 22 11:06:03.893: INFO: Created: latency-svc-crqll May 22 11:06:03.918: INFO: Got endpoints: latency-svc-crqll [1.100421282s] May 22 11:06:03.982: INFO: Created: latency-svc-qgn4j May 22 11:06:03.984: INFO: Got endpoints: latency-svc-qgn4j [1.106931544s] May 22 11:06:04.013: INFO: Created: latency-svc-8mf4b May 22 11:06:04.027: INFO: Got endpoints: latency-svc-8mf4b [1.089376925s] May 22 11:06:04.049: INFO: Created: latency-svc-2d8zw May 22 11:06:04.063: INFO: Got endpoints: latency-svc-2d8zw [1.033393891s] May 22 11:06:04.125: INFO: Created: latency-svc-xfdtp May 22 11:06:04.129: INFO: Got endpoints: latency-svc-xfdtp [1.050275162s] May 22 11:06:04.158: INFO: Created: latency-svc-jmvsr May 22 11:06:04.176: INFO: Got endpoints: latency-svc-jmvsr [986.21215ms] May 22 11:06:04.200: INFO: Created: latency-svc-z2bqg May 22 11:06:04.219: INFO: Got endpoints: latency-svc-z2bqg [950.324105ms] May 22 11:06:04.265: INFO: Created: 
latency-svc-slcbk May 22 11:06:04.269: INFO: Got endpoints: latency-svc-slcbk [866.138716ms] May 22 11:06:04.301: INFO: Created: latency-svc-vdxsx May 22 11:06:04.328: INFO: Got endpoints: latency-svc-vdxsx [872.652445ms] May 22 11:06:04.350: INFO: Created: latency-svc-jlmlw May 22 11:06:04.418: INFO: Got endpoints: latency-svc-jlmlw [885.028764ms] May 22 11:06:04.422: INFO: Created: latency-svc-hzgn7 May 22 11:06:04.429: INFO: Got endpoints: latency-svc-hzgn7 [771.405593ms] May 22 11:06:04.454: INFO: Created: latency-svc-7dvj7 May 22 11:06:04.466: INFO: Got endpoints: latency-svc-7dvj7 [794.210657ms] May 22 11:06:04.486: INFO: Created: latency-svc-jdw89 May 22 11:06:04.495: INFO: Got endpoints: latency-svc-jdw89 [739.556299ms] May 22 11:06:04.518: INFO: Created: latency-svc-dqrhp May 22 11:06:04.574: INFO: Got endpoints: latency-svc-dqrhp [745.939863ms] May 22 11:06:04.596: INFO: Created: latency-svc-mwfbr May 22 11:06:04.610: INFO: Got endpoints: latency-svc-mwfbr [739.94165ms] May 22 11:06:04.631: INFO: Created: latency-svc-v5gh2 May 22 11:06:04.647: INFO: Got endpoints: latency-svc-v5gh2 [728.693542ms] May 22 11:06:04.667: INFO: Created: latency-svc-d7c7b May 22 11:06:04.718: INFO: Got endpoints: latency-svc-d7c7b [733.415799ms] May 22 11:06:04.728: INFO: Created: latency-svc-gt9pc May 22 11:06:04.743: INFO: Got endpoints: latency-svc-gt9pc [716.153766ms] May 22 11:06:04.771: INFO: Created: latency-svc-kvvts May 22 11:06:04.786: INFO: Got endpoints: latency-svc-kvvts [722.444552ms] May 22 11:06:04.816: INFO: Created: latency-svc-drz7j May 22 11:06:04.927: INFO: Got endpoints: latency-svc-drz7j [798.6635ms] May 22 11:06:04.929: INFO: Created: latency-svc-pm8hs May 22 11:06:04.936: INFO: Got endpoints: latency-svc-pm8hs [759.917728ms] May 22 11:06:04.975: INFO: Created: latency-svc-t624r May 22 11:06:04.992: INFO: Got endpoints: latency-svc-t624r [773.651704ms] May 22 11:06:05.015: INFO: Created: latency-svc-pp667 May 22 11:06:05.089: INFO: Got endpoints: latency-svc-pp667 [820.199208ms] May 22 11:06:05.092: INFO: Created: latency-svc-87589 May 22 11:06:05.105: INFO: Got endpoints: latency-svc-87589 [777.225325ms] May 22 11:06:05.177: INFO: Created: latency-svc-w6q75 May 22 11:06:05.269: INFO: Got endpoints: latency-svc-w6q75 [850.36572ms] May 22 11:06:05.271: INFO: Created: latency-svc-7bshn May 22 11:06:05.279: INFO: Got endpoints: latency-svc-7bshn [849.6738ms] May 22 11:06:05.339: INFO: Created: latency-svc-frn6r May 22 11:06:05.364: INFO: Got endpoints: latency-svc-frn6r [897.793303ms] May 22 11:06:05.419: INFO: Created: latency-svc-j6wrw May 22 11:06:05.423: INFO: Got endpoints: latency-svc-j6wrw [927.992427ms] May 22 11:06:05.455: INFO: Created: latency-svc-g7b9h May 22 11:06:05.478: INFO: Got endpoints: latency-svc-g7b9h [904.055949ms] May 22 11:06:05.501: INFO: Created: latency-svc-j7rk9 May 22 11:06:05.556: INFO: Got endpoints: latency-svc-j7rk9 [945.396502ms] May 22 11:06:05.599: INFO: Created: latency-svc-4sgjs May 22 11:06:05.628: INFO: Got endpoints: latency-svc-4sgjs [981.561667ms] May 22 11:06:05.654: INFO: Created: latency-svc-cr95v May 22 11:06:05.724: INFO: Got endpoints: latency-svc-cr95v [1.005623629s] May 22 11:06:05.731: INFO: Created: latency-svc-8zq97 May 22 11:06:05.737: INFO: Got endpoints: latency-svc-8zq97 [993.730657ms] May 22 11:06:05.773: INFO: Created: latency-svc-vw6sh May 22 11:06:05.779: INFO: Got endpoints: latency-svc-vw6sh [993.591321ms] May 22 11:06:05.802: INFO: Created: latency-svc-jggt5 May 22 11:06:05.885: INFO: Got endpoints: latency-svc-jggt5 
[957.91769ms] May 22 11:06:05.899: INFO: Created: latency-svc-k65gg May 22 11:06:05.912: INFO: Got endpoints: latency-svc-k65gg [975.739917ms] May 22 11:06:05.935: INFO: Created: latency-svc-vl6r5 May 22 11:06:05.948: INFO: Got endpoints: latency-svc-vl6r5 [956.049725ms] May 22 11:06:05.970: INFO: Created: latency-svc-qn99x May 22 11:06:06.029: INFO: Got endpoints: latency-svc-qn99x [939.755638ms] May 22 11:06:06.054: INFO: Created: latency-svc-cpbhn May 22 11:06:06.084: INFO: Got endpoints: latency-svc-cpbhn [979.626247ms] May 22 11:06:06.115: INFO: Created: latency-svc-qbtq9 May 22 11:06:06.128: INFO: Got endpoints: latency-svc-qbtq9 [859.642632ms] May 22 11:06:06.186: INFO: Created: latency-svc-jdl7k May 22 11:06:06.189: INFO: Got endpoints: latency-svc-jdl7k [909.843048ms] May 22 11:06:06.216: INFO: Created: latency-svc-2d5fv May 22 11:06:06.225: INFO: Got endpoints: latency-svc-2d5fv [861.715416ms] May 22 11:06:06.247: INFO: Created: latency-svc-4nglv May 22 11:06:06.277: INFO: Got endpoints: latency-svc-4nglv [853.337456ms] May 22 11:06:06.335: INFO: Created: latency-svc-hxhfz May 22 11:06:06.339: INFO: Got endpoints: latency-svc-hxhfz [861.392563ms] May 22 11:06:06.359: INFO: Created: latency-svc-rfvd2 May 22 11:06:06.376: INFO: Got endpoints: latency-svc-rfvd2 [820.087475ms] May 22 11:06:06.395: INFO: Created: latency-svc-hflql May 22 11:06:06.414: INFO: Got endpoints: latency-svc-hflql [785.301082ms] May 22 11:06:06.485: INFO: Created: latency-svc-q9s4q May 22 11:06:06.487: INFO: Got endpoints: latency-svc-q9s4q [763.742866ms] May 22 11:06:06.517: INFO: Created: latency-svc-z259c May 22 11:06:06.529: INFO: Got endpoints: latency-svc-z259c [792.160582ms] May 22 11:06:06.551: INFO: Created: latency-svc-rfsdw May 22 11:06:06.573: INFO: Got endpoints: latency-svc-rfsdw [793.107338ms] May 22 11:06:06.623: INFO: Created: latency-svc-l5vrn May 22 11:06:06.637: INFO: Got endpoints: latency-svc-l5vrn [751.945676ms] May 22 11:06:06.661: INFO: Created: latency-svc-8mwpx May 22 11:06:06.715: INFO: Created: latency-svc-hshpw May 22 11:06:06.773: INFO: Created: latency-svc-bm2lc May 22 11:06:06.776: INFO: Got endpoints: latency-svc-8mwpx [864.367317ms] May 22 11:06:06.782: INFO: Got endpoints: latency-svc-bm2lc [752.518146ms] May 22 11:06:06.782: INFO: Got endpoints: latency-svc-hshpw [833.908793ms] May 22 11:06:06.803: INFO: Created: latency-svc-xj4x5 May 22 11:06:06.834: INFO: Got endpoints: latency-svc-xj4x5 [749.099054ms] May 22 11:06:06.928: INFO: Created: latency-svc-tgpwd May 22 11:06:06.959: INFO: Got endpoints: latency-svc-tgpwd [830.758089ms] May 22 11:06:06.989: INFO: Created: latency-svc-f2xgp May 22 11:06:07.005: INFO: Got endpoints: latency-svc-f2xgp [816.264018ms] May 22 11:06:07.077: INFO: Created: latency-svc-p2n7c May 22 11:06:07.083: INFO: Got endpoints: latency-svc-p2n7c [857.509902ms] May 22 11:06:07.124: INFO: Created: latency-svc-6nh85 May 22 11:06:07.143: INFO: Got endpoints: latency-svc-6nh85 [866.690906ms] May 22 11:06:07.265: INFO: Created: latency-svc-9d2rw May 22 11:06:07.265: INFO: Got endpoints: latency-svc-9d2rw [925.836388ms] May 22 11:06:07.303: INFO: Created: latency-svc-fpjfh May 22 11:06:07.331: INFO: Got endpoints: latency-svc-fpjfh [954.548736ms] May 22 11:06:07.419: INFO: Created: latency-svc-7pwxt May 22 11:06:07.421: INFO: Got endpoints: latency-svc-7pwxt [1.00786137s] May 22 11:06:07.463: INFO: Created: latency-svc-lhmh6 May 22 11:06:07.512: INFO: Got endpoints: latency-svc-lhmh6 [1.02510769s] May 22 11:06:07.513: INFO: Latencies: [127.662651ms 
169.081346ms 212.078851ms 282.70083ms 331.384341ms 368.107782ms 431.035701ms 470.303448ms 557.998705ms 591.079646ms 633.045114ms 653.948816ms 690.377804ms 696.947819ms 699.374969ms 701.460461ms 701.92088ms 710.34112ms 711.41087ms 715.66672ms 716.153766ms 719.432226ms 720.04984ms 722.444552ms 724.219961ms 725.469605ms 728.693542ms 730.470672ms 733.415799ms 736.574016ms 739.556299ms 739.94165ms 741.59939ms 742.327329ms 745.939863ms 747.06433ms 747.895691ms 749.099054ms 751.945676ms 752.518146ms 753.992668ms 759.917728ms 763.742866ms 763.794815ms 763.922111ms 766.119593ms 766.203119ms 771.405593ms 771.735706ms 773.651704ms 777.225325ms 777.495395ms 777.741194ms 785.301082ms 788.584162ms 792.160582ms 793.107338ms 794.210657ms 798.023688ms 798.6635ms 799.19979ms 801.919232ms 808.498568ms 808.70098ms 816.264018ms 820.087475ms 820.199208ms 830.758089ms 833.908793ms 835.868521ms 836.419152ms 837.105568ms 837.848313ms 842.954977ms 843.239309ms 844.51037ms 847.711765ms 849.6738ms 850.066989ms 850.36572ms 853.337456ms 853.674003ms 855.243178ms 857.509902ms 858.529642ms 859.642632ms 861.370643ms 861.392563ms 861.601843ms 861.715416ms 862.582296ms 864.367317ms 864.580909ms 864.89932ms 866.138716ms 866.690906ms 867.286041ms 867.664933ms 869.226883ms 870.623057ms 872.652445ms 875.586855ms 879.288398ms 883.400918ms 885.028764ms 885.249385ms 885.307266ms 885.765963ms 887.909504ms 889.844727ms 890.499304ms 891.298627ms 891.766623ms 895.397727ms 896.411641ms 896.993082ms 897.793303ms 898.219233ms 904.055949ms 904.625346ms 905.622128ms 906.215274ms 909.124443ms 909.843048ms 909.958771ms 918.739788ms 922.061728ms 923.279675ms 923.355877ms 923.431607ms 925.836388ms 925.991735ms 927.992427ms 929.537349ms 929.557453ms 939.755638ms 941.317908ms 945.396502ms 950.324105ms 954.548736ms 956.049725ms 957.91769ms 962.965652ms 963.883367ms 975.328018ms 975.739917ms 979.626247ms 981.561667ms 986.21215ms 993.591321ms 993.730657ms 1.005623629s 1.00786137s 1.017794129s 1.02510769s 1.029400014s 1.033393891s 1.050275162s 1.067304948s 1.071637585s 1.089376925s 1.093224831s 1.094507323s 1.100421282s 1.106931544s 1.230314384s 1.245239262s 1.660767907s 1.718325563s 1.72315657s 1.736721924s 2.263405942s 2.275633193s 2.31094646s 2.335151664s 2.76553613s 2.807052566s 3.360046068s 3.444394453s 3.900563692s 3.912846245s 4.130442895s 4.423987531s 4.425766416s 5.446854979s 5.57324646s 5.796033751s 6.860375969s 6.871032004s 7.385508336s 7.405339168s 7.848414045s 7.920165743s 7.998394285s 8.484332601s 8.495083958s 8.507522688s 8.525883047s 8.572964058s 8.956835575s] May 22 11:06:07.513: INFO: 50 %ile: 872.652445ms May 22 11:06:07.513: INFO: 90 %ile: 3.912846245s May 22 11:06:07.513: INFO: 99 %ile: 8.572964058s May 22 11:06:07.513: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:06:07.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-lfvt9" for this suite. 
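The service-endpoint-latency test records one duration per created service (the bracketed values above), then sorts the samples and reports the 50th/90th/99th percentiles, failing only if they are unreasonably high. A minimal, self-contained sketch of that percentile summary; the index-based pick is an assumption and the framework's exact percentile math may differ slightly:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at fraction q (e.g. 0.5, 0.9, 0.99) of an
// already-sorted slice of latencies.
func percentile(sorted []time.Duration, q float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(q * float64(len(sorted)))
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Illustrative sample only; the real run summarizes the ~200 values logged above.
	latencies := []time.Duration{
		127 * time.Millisecond, 750 * time.Millisecond, 872 * time.Millisecond,
		900 * time.Millisecond, 3912 * time.Millisecond, 8572 * time.Millisecond,
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	fmt.Println("50 %ile:", percentile(latencies, 0.50))
	fmt.Println("90 %ile:", percentile(latencies, 0.90))
	fmt.Println("99 %ile:", percentile(latencies, 0.99))
	fmt.Println("Total sample count:", len(latencies))
}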
May 22 11:06:43.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:06:43.596: INFO: namespace: e2e-tests-svc-latency-lfvt9, resource: bindings, ignored listing per whitelist May 22 11:06:43.643: INFO: namespace e2e-tests-svc-latency-lfvt9 deletion completed in 36.090821394s • [SLOW TEST:61.552 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:06:43.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 22 11:06:43.740: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 11:06:43.770: INFO: Waiting for terminating namespaces to be deleted... May 22 11:06:43.772: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 22 11:06:43.779: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 22 11:06:43.779: INFO: Container kube-proxy ready: true, restart count 0 May 22 11:06:43.779: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 22 11:06:43.779: INFO: Container kindnet-cni ready: true, restart count 0 May 22 11:06:43.779: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 22 11:06:43.779: INFO: Container coredns ready: true, restart count 0 May 22 11:06:43.779: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 22 11:06:43.786: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 22 11:06:43.786: INFO: Container kindnet-cni ready: true, restart count 0 May 22 11:06:43.786: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 22 11:06:43.786: INFO: Container coredns ready: true, restart count 0 May 22 11:06:43.786: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 22 11:06:43.786: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-54788e50-9c1c-11ea-8e9c-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-54788e50-9c1c-11ea-8e9c-0242ac110018 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-54788e50-9c1c-11ea-8e9c-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:06:52.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-kgs7d" for this suite. May 22 11:07:04.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:07:04.154: INFO: namespace: e2e-tests-sched-pred-kgs7d, resource: bindings, ignored listing per whitelist May 22 11:07:04.212: INFO: namespace e2e-tests-sched-pred-kgs7d deletion completed in 12.091073937s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:20.569 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:07:04.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 22 11:07:04.318: INFO: Waiting up to 5m0s for pod "downward-api-5e480404-9c1c-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-5n7vh" to be "success or failure" May 22 11:07:04.322: INFO: Pod "downward-api-5e480404-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.376284ms May 22 11:07:06.361: INFO: Pod "downward-api-5e480404-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043831472s May 22 11:07:08.365: INFO: Pod "downward-api-5e480404-9c1c-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.047851464s May 22 11:07:10.370: INFO: Pod "downward-api-5e480404-9c1c-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.052134698s STEP: Saw pod success May 22 11:07:10.370: INFO: Pod "downward-api-5e480404-9c1c-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:07:10.372: INFO: Trying to get logs from node hunter-worker pod downward-api-5e480404-9c1c-11ea-8e9c-0242ac110018 container dapi-container: STEP: delete the pod May 22 11:07:10.391: INFO: Waiting for pod downward-api-5e480404-9c1c-11ea-8e9c-0242ac110018 to disappear May 22 11:07:10.395: INFO: Pod downward-api-5e480404-9c1c-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:07:10.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5n7vh" for this suite. May 22 11:07:18.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:07:18.503: INFO: namespace: e2e-tests-downward-api-5n7vh, resource: bindings, ignored listing per whitelist May 22 11:07:18.527: INFO: namespace e2e-tests-downward-api-5n7vh deletion completed in 8.128448138s • [SLOW TEST:14.315 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:07:18.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-66cec6b9-9c1c-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:07:18.740: INFO: Waiting up to 5m0s for pod "pod-secrets-66e091ad-9c1c-11ea-8e9c-0242ac110018" in namespace "e2e-tests-secrets-5ljhg" to be "success or failure" May 22 11:07:18.743: INFO: Pod "pod-secrets-66e091ad-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.88373ms May 22 11:07:20.750: INFO: Pod "pod-secrets-66e091ad-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009942436s May 22 11:07:22.752: INFO: Pod "pod-secrets-66e091ad-9c1c-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012797967s STEP: Saw pod success May 22 11:07:22.752: INFO: Pod "pod-secrets-66e091ad-9c1c-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:07:22.754: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-66e091ad-9c1c-11ea-8e9c-0242ac110018 container secret-volume-test: STEP: delete the pod May 22 11:07:22.780: INFO: Waiting for pod pod-secrets-66e091ad-9c1c-11ea-8e9c-0242ac110018 to disappear May 22 11:07:22.809: INFO: Pod pod-secrets-66e091ad-9c1c-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:07:22.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5ljhg" for this suite. May 22 11:07:28.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:07:28.896: INFO: namespace: e2e-tests-secrets-5ljhg, resource: bindings, ignored listing per whitelist May 22 11:07:28.912: INFO: namespace e2e-tests-secrets-5ljhg deletion completed in 6.100004576s STEP: Destroying namespace "e2e-tests-secret-namespace-qllfm" for this suite. May 22 11:07:34.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:07:35.175: INFO: namespace: e2e-tests-secret-namespace-qllfm, resource: bindings, ignored listing per whitelist May 22 11:07:35.197: INFO: namespace e2e-tests-secret-namespace-qllfm deletion completed in 6.28536256s • [SLOW TEST:16.671 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:07:35.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-710de9c0-9c1c-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:07:36.069: INFO: Waiting up to 5m0s for pod "pod-configmaps-7110657e-9c1c-11ea-8e9c-0242ac110018" in namespace "e2e-tests-configmap-z2bxt" to be "success or failure" May 22 11:07:36.095: INFO: Pod "pod-configmaps-7110657e-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.60607ms May 22 11:07:38.151: INFO: Pod "pod-configmaps-7110657e-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08143059s May 22 11:07:40.154: INFO: Pod "pod-configmaps-7110657e-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.084642352s May 22 11:07:42.158: INFO: Pod "pod-configmaps-7110657e-9c1c-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088022341s STEP: Saw pod success May 22 11:07:42.158: INFO: Pod "pod-configmaps-7110657e-9c1c-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:07:42.160: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-7110657e-9c1c-11ea-8e9c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 22 11:07:42.214: INFO: Waiting for pod pod-configmaps-7110657e-9c1c-11ea-8e9c-0242ac110018 to disappear May 22 11:07:42.229: INFO: Pod pod-configmaps-7110657e-9c1c-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:07:42.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-z2bxt" for this suite. May 22 11:07:48.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:07:48.309: INFO: namespace: e2e-tests-configmap-z2bxt, resource: bindings, ignored listing per whitelist May 22 11:07:48.315: INFO: namespace e2e-tests-configmap-z2bxt deletion completed in 6.083491344s • [SLOW TEST:13.118 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:07:48.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-78c153d8-9c1c-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:07:48.728: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-78c1cce3-9c1c-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-f4g55" to be "success or failure" May 22 11:07:48.899: INFO: Pod "pod-projected-configmaps-78c1cce3-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 171.011256ms May 22 11:07:50.904: INFO: Pod "pod-projected-configmaps-78c1cce3-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175870962s May 22 11:07:52.908: INFO: Pod "pod-projected-configmaps-78c1cce3-9c1c-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.18005722s STEP: Saw pod success May 22 11:07:52.908: INFO: Pod "pod-projected-configmaps-78c1cce3-9c1c-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:07:52.911: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-78c1cce3-9c1c-11ea-8e9c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 22 11:07:52.979: INFO: Waiting for pod pod-projected-configmaps-78c1cce3-9c1c-11ea-8e9c-0242ac110018 to disappear May 22 11:07:53.110: INFO: Pod pod-projected-configmaps-78c1cce3-9c1c-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:07:53.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f4g55" for this suite. May 22 11:07:59.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:07:59.208: INFO: namespace: e2e-tests-projected-f4g55, resource: bindings, ignored listing per whitelist May 22 11:07:59.220: INFO: namespace e2e-tests-projected-f4g55 deletion completed in 6.107246772s • [SLOW TEST:10.904 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:07:59.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-zlt5s/secret-test-7f17a096-9c1c-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:07:59.408: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f1e0670-9c1c-11ea-8e9c-0242ac110018" in namespace "e2e-tests-secrets-zlt5s" to be "success or failure" May 22 11:07:59.419: INFO: Pod "pod-configmaps-7f1e0670-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.477277ms May 22 11:08:01.423: INFO: Pod "pod-configmaps-7f1e0670-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014326801s May 22 11:08:03.427: INFO: Pod "pod-configmaps-7f1e0670-9c1c-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018654736s STEP: Saw pod success May 22 11:08:03.427: INFO: Pod "pod-configmaps-7f1e0670-9c1c-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:08:03.430: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-7f1e0670-9c1c-11ea-8e9c-0242ac110018 container env-test: STEP: delete the pod May 22 11:08:03.463: INFO: Waiting for pod pod-configmaps-7f1e0670-9c1c-11ea-8e9c-0242ac110018 to disappear May 22 11:08:03.478: INFO: Pod pod-configmaps-7f1e0670-9c1c-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:08:03.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-zlt5s" for this suite. May 22 11:08:09.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:08:09.571: INFO: namespace: e2e-tests-secrets-zlt5s, resource: bindings, ignored listing per whitelist May 22 11:08:09.602: INFO: namespace e2e-tests-secrets-zlt5s deletion completed in 6.119930854s • [SLOW TEST:10.382 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:08:09.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 11:08:10.299: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:08:11.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-twz6n" for this suite. 
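For orientation, the CustomResourceDefinition objects this spec creates and deletes are generated by the test itself; a minimal sketch of an equivalent CRD for this v1.13 cluster (group, kind and names below are illustrative, not the randomized ones the test uses) could look like:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct

The conformance check above only asserts that such a definition can be created and then deleted cleanly; it does not create any custom objects of the new kind.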
May 22 11:08:17.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:08:17.576: INFO: namespace: e2e-tests-custom-resource-definition-twz6n, resource: bindings, ignored listing per whitelist May 22 11:08:17.580: INFO: namespace e2e-tests-custom-resource-definition-twz6n deletion completed in 6.108655465s • [SLOW TEST:7.978 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:08:17.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-t6nl5 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-t6nl5 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-t6nl5 May 22 11:08:17.697: INFO: Found 0 stateful pods, waiting for 1 May 22 11:08:27.701: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 22 11:08:27.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6nl5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 11:08:27.947: INFO: stderr: "I0522 11:08:27.834096 724 log.go:172] (0xc000138160) (0xc0006ac780) Create stream\nI0522 11:08:27.834159 724 log.go:172] (0xc000138160) (0xc0006ac780) Stream added, broadcasting: 1\nI0522 11:08:27.836378 724 log.go:172] (0xc000138160) Reply frame received for 1\nI0522 11:08:27.836430 724 log.go:172] (0xc000138160) (0xc000298d20) Create stream\nI0522 11:08:27.836451 724 log.go:172] (0xc000138160) (0xc000298d20) Stream added, broadcasting: 3\nI0522 11:08:27.837775 724 log.go:172] (0xc000138160) Reply frame received for 3\nI0522 11:08:27.837820 724 log.go:172] (0xc000138160) (0xc0006ac820) Create stream\nI0522 11:08:27.837837 724 log.go:172] 
(0xc000138160) (0xc0006ac820) Stream added, broadcasting: 5\nI0522 11:08:27.839002 724 log.go:172] (0xc000138160) Reply frame received for 5\nI0522 11:08:27.939420 724 log.go:172] (0xc000138160) Data frame received for 5\nI0522 11:08:27.939463 724 log.go:172] (0xc0006ac820) (5) Data frame handling\nI0522 11:08:27.939486 724 log.go:172] (0xc000138160) Data frame received for 3\nI0522 11:08:27.939504 724 log.go:172] (0xc000298d20) (3) Data frame handling\nI0522 11:08:27.939513 724 log.go:172] (0xc000298d20) (3) Data frame sent\nI0522 11:08:27.939524 724 log.go:172] (0xc000138160) Data frame received for 3\nI0522 11:08:27.939535 724 log.go:172] (0xc000298d20) (3) Data frame handling\nI0522 11:08:27.941854 724 log.go:172] (0xc000138160) Data frame received for 1\nI0522 11:08:27.941873 724 log.go:172] (0xc0006ac780) (1) Data frame handling\nI0522 11:08:27.941882 724 log.go:172] (0xc0006ac780) (1) Data frame sent\nI0522 11:08:27.941901 724 log.go:172] (0xc000138160) (0xc0006ac780) Stream removed, broadcasting: 1\nI0522 11:08:27.941933 724 log.go:172] (0xc000138160) Go away received\nI0522 11:08:27.942137 724 log.go:172] (0xc000138160) (0xc0006ac780) Stream removed, broadcasting: 1\nI0522 11:08:27.942229 724 log.go:172] (0xc000138160) (0xc000298d20) Stream removed, broadcasting: 3\nI0522 11:08:27.942258 724 log.go:172] (0xc000138160) (0xc0006ac820) Stream removed, broadcasting: 5\n" May 22 11:08:27.947: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 11:08:27.947: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 11:08:27.951: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 22 11:08:37.956: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 22 11:08:37.956: INFO: Waiting for statefulset status.replicas updated to 0 May 22 11:08:37.971: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999633s May 22 11:08:38.975: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995086181s May 22 11:08:39.979: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990779979s May 22 11:08:40.984: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986875884s May 22 11:08:41.990: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981676678s May 22 11:08:42.995: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.976397822s May 22 11:08:44.000: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.971451695s May 22 11:08:45.003: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.966060488s May 22 11:08:46.009: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.962613019s May 22 11:08:47.278: INFO: Verifying statefulset ss doesn't scale past 1 for another 956.941915ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-t6nl5 May 22 11:08:48.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6nl5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 11:08:48.484: INFO: stderr: "I0522 11:08:48.398619 747 log.go:172] (0xc000162840) (0xc00053b360) Create stream\nI0522 11:08:48.398673 747 log.go:172] (0xc000162840) (0xc00053b360) Stream added, broadcasting: 1\nI0522 11:08:48.407004 747 log.go:172] (0xc000162840) Reply 
frame received for 1\nI0522 11:08:48.407060 747 log.go:172] (0xc000162840) (0xc00053b400) Create stream\nI0522 11:08:48.407072 747 log.go:172] (0xc000162840) (0xc00053b400) Stream added, broadcasting: 3\nI0522 11:08:48.408829 747 log.go:172] (0xc000162840) Reply frame received for 3\nI0522 11:08:48.408918 747 log.go:172] (0xc000162840) (0xc00053b4a0) Create stream\nI0522 11:08:48.409002 747 log.go:172] (0xc000162840) (0xc00053b4a0) Stream added, broadcasting: 5\nI0522 11:08:48.410281 747 log.go:172] (0xc000162840) Reply frame received for 5\nI0522 11:08:48.478471 747 log.go:172] (0xc000162840) Data frame received for 5\nI0522 11:08:48.478516 747 log.go:172] (0xc00053b4a0) (5) Data frame handling\nI0522 11:08:48.478563 747 log.go:172] (0xc000162840) Data frame received for 3\nI0522 11:08:48.478583 747 log.go:172] (0xc00053b400) (3) Data frame handling\nI0522 11:08:48.478600 747 log.go:172] (0xc00053b400) (3) Data frame sent\nI0522 11:08:48.478618 747 log.go:172] (0xc000162840) Data frame received for 3\nI0522 11:08:48.478642 747 log.go:172] (0xc00053b400) (3) Data frame handling\nI0522 11:08:48.479888 747 log.go:172] (0xc000162840) Data frame received for 1\nI0522 11:08:48.479908 747 log.go:172] (0xc00053b360) (1) Data frame handling\nI0522 11:08:48.479916 747 log.go:172] (0xc00053b360) (1) Data frame sent\nI0522 11:08:48.479932 747 log.go:172] (0xc000162840) (0xc00053b360) Stream removed, broadcasting: 1\nI0522 11:08:48.480075 747 log.go:172] (0xc000162840) (0xc00053b360) Stream removed, broadcasting: 1\nI0522 11:08:48.480088 747 log.go:172] (0xc000162840) (0xc00053b400) Stream removed, broadcasting: 3\nI0522 11:08:48.480093 747 log.go:172] (0xc000162840) (0xc00053b4a0) Stream removed, broadcasting: 5\n" May 22 11:08:48.484: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 11:08:48.484: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 11:08:48.488: INFO: Found 1 stateful pods, waiting for 3 May 22 11:08:58.493: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 22 11:08:58.493: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 22 11:08:58.493: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false May 22 11:09:08.494: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 22 11:09:08.494: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 22 11:09:08.494: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 22 11:09:08.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6nl5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 11:09:08.705: INFO: stderr: "I0522 11:09:08.635711 770 log.go:172] (0xc00015e580) (0xc000481400) Create stream\nI0522 11:09:08.635782 770 log.go:172] (0xc00015e580) (0xc000481400) Stream added, broadcasting: 1\nI0522 11:09:08.638453 770 log.go:172] (0xc00015e580) Reply frame received for 1\nI0522 11:09:08.638507 770 log.go:172] (0xc00015e580) (0xc00071e000) Create stream\nI0522 11:09:08.638524 770 log.go:172] (0xc00015e580) (0xc00071e000) Stream added, broadcasting: 3\nI0522 11:09:08.639445 770 log.go:172] 
(0xc00015e580) Reply frame received for 3\nI0522 11:09:08.639488 770 log.go:172] (0xc00015e580) (0xc00039a000) Create stream\nI0522 11:09:08.639502 770 log.go:172] (0xc00015e580) (0xc00039a000) Stream added, broadcasting: 5\nI0522 11:09:08.640326 770 log.go:172] (0xc00015e580) Reply frame received for 5\nI0522 11:09:08.697831 770 log.go:172] (0xc00015e580) Data frame received for 3\nI0522 11:09:08.697859 770 log.go:172] (0xc00071e000) (3) Data frame handling\nI0522 11:09:08.697866 770 log.go:172] (0xc00071e000) (3) Data frame sent\nI0522 11:09:08.697871 770 log.go:172] (0xc00015e580) Data frame received for 3\nI0522 11:09:08.697875 770 log.go:172] (0xc00071e000) (3) Data frame handling\nI0522 11:09:08.697897 770 log.go:172] (0xc00015e580) Data frame received for 5\nI0522 11:09:08.697902 770 log.go:172] (0xc00039a000) (5) Data frame handling\nI0522 11:09:08.699737 770 log.go:172] (0xc00015e580) Data frame received for 1\nI0522 11:09:08.699770 770 log.go:172] (0xc000481400) (1) Data frame handling\nI0522 11:09:08.699796 770 log.go:172] (0xc000481400) (1) Data frame sent\nI0522 11:09:08.699815 770 log.go:172] (0xc00015e580) (0xc000481400) Stream removed, broadcasting: 1\nI0522 11:09:08.699829 770 log.go:172] (0xc00015e580) Go away received\nI0522 11:09:08.700130 770 log.go:172] (0xc00015e580) (0xc000481400) Stream removed, broadcasting: 1\nI0522 11:09:08.700155 770 log.go:172] (0xc00015e580) (0xc00071e000) Stream removed, broadcasting: 3\nI0522 11:09:08.700169 770 log.go:172] (0xc00015e580) (0xc00039a000) Stream removed, broadcasting: 5\n" May 22 11:09:08.706: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 11:09:08.706: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 11:09:08.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6nl5 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 11:09:08.938: INFO: stderr: "I0522 11:09:08.830721 792 log.go:172] (0xc000138630) (0xc000728640) Create stream\nI0522 11:09:08.830801 792 log.go:172] (0xc000138630) (0xc000728640) Stream added, broadcasting: 1\nI0522 11:09:08.833516 792 log.go:172] (0xc000138630) Reply frame received for 1\nI0522 11:09:08.833568 792 log.go:172] (0xc000138630) (0xc0007286e0) Create stream\nI0522 11:09:08.833587 792 log.go:172] (0xc000138630) (0xc0007286e0) Stream added, broadcasting: 3\nI0522 11:09:08.834557 792 log.go:172] (0xc000138630) Reply frame received for 3\nI0522 11:09:08.834606 792 log.go:172] (0xc000138630) (0xc00029abe0) Create stream\nI0522 11:09:08.834633 792 log.go:172] (0xc000138630) (0xc00029abe0) Stream added, broadcasting: 5\nI0522 11:09:08.835526 792 log.go:172] (0xc000138630) Reply frame received for 5\nI0522 11:09:08.931725 792 log.go:172] (0xc000138630) Data frame received for 3\nI0522 11:09:08.931761 792 log.go:172] (0xc0007286e0) (3) Data frame handling\nI0522 11:09:08.931797 792 log.go:172] (0xc0007286e0) (3) Data frame sent\nI0522 11:09:08.931810 792 log.go:172] (0xc000138630) Data frame received for 3\nI0522 11:09:08.931821 792 log.go:172] (0xc0007286e0) (3) Data frame handling\nI0522 11:09:08.932217 792 log.go:172] (0xc000138630) Data frame received for 5\nI0522 11:09:08.932245 792 log.go:172] (0xc00029abe0) (5) Data frame handling\nI0522 11:09:08.933947 792 log.go:172] (0xc000138630) Data frame received for 1\nI0522 11:09:08.933968 792 log.go:172] (0xc000728640) (1) Data frame 
handling\nI0522 11:09:08.933976 792 log.go:172] (0xc000728640) (1) Data frame sent\nI0522 11:09:08.933986 792 log.go:172] (0xc000138630) (0xc000728640) Stream removed, broadcasting: 1\nI0522 11:09:08.934075 792 log.go:172] (0xc000138630) Go away received\nI0522 11:09:08.934193 792 log.go:172] (0xc000138630) (0xc000728640) Stream removed, broadcasting: 1\nI0522 11:09:08.934209 792 log.go:172] (0xc000138630) (0xc0007286e0) Stream removed, broadcasting: 3\nI0522 11:09:08.934217 792 log.go:172] (0xc000138630) (0xc00029abe0) Stream removed, broadcasting: 5\n" May 22 11:09:08.938: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 11:09:08.938: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 11:09:08.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6nl5 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 11:09:09.182: INFO: stderr: "I0522 11:09:09.072429 814 log.go:172] (0xc000172840) (0xc0006d0640) Create stream\nI0522 11:09:09.072484 814 log.go:172] (0xc000172840) (0xc0006d0640) Stream added, broadcasting: 1\nI0522 11:09:09.075089 814 log.go:172] (0xc000172840) Reply frame received for 1\nI0522 11:09:09.075138 814 log.go:172] (0xc000172840) (0xc00074ec80) Create stream\nI0522 11:09:09.075155 814 log.go:172] (0xc000172840) (0xc00074ec80) Stream added, broadcasting: 3\nI0522 11:09:09.076187 814 log.go:172] (0xc000172840) Reply frame received for 3\nI0522 11:09:09.076232 814 log.go:172] (0xc000172840) (0xc0006d06e0) Create stream\nI0522 11:09:09.076245 814 log.go:172] (0xc000172840) (0xc0006d06e0) Stream added, broadcasting: 5\nI0522 11:09:09.077639 814 log.go:172] (0xc000172840) Reply frame received for 5\nI0522 11:09:09.175216 814 log.go:172] (0xc000172840) Data frame received for 3\nI0522 11:09:09.175250 814 log.go:172] (0xc00074ec80) (3) Data frame handling\nI0522 11:09:09.175261 814 log.go:172] (0xc00074ec80) (3) Data frame sent\nI0522 11:09:09.175268 814 log.go:172] (0xc000172840) Data frame received for 3\nI0522 11:09:09.175274 814 log.go:172] (0xc00074ec80) (3) Data frame handling\nI0522 11:09:09.175303 814 log.go:172] (0xc000172840) Data frame received for 5\nI0522 11:09:09.175310 814 log.go:172] (0xc0006d06e0) (5) Data frame handling\nI0522 11:09:09.177647 814 log.go:172] (0xc000172840) Data frame received for 1\nI0522 11:09:09.177674 814 log.go:172] (0xc0006d0640) (1) Data frame handling\nI0522 11:09:09.177689 814 log.go:172] (0xc0006d0640) (1) Data frame sent\nI0522 11:09:09.177718 814 log.go:172] (0xc000172840) (0xc0006d0640) Stream removed, broadcasting: 1\nI0522 11:09:09.177737 814 log.go:172] (0xc000172840) Go away received\nI0522 11:09:09.178039 814 log.go:172] (0xc000172840) (0xc0006d0640) Stream removed, broadcasting: 1\nI0522 11:09:09.178073 814 log.go:172] (0xc000172840) (0xc00074ec80) Stream removed, broadcasting: 3\nI0522 11:09:09.178105 814 log.go:172] (0xc000172840) (0xc0006d06e0) Stream removed, broadcasting: 5\n" May 22 11:09:09.182: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 11:09:09.182: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 11:09:09.182: INFO: Waiting for statefulset status.replicas updated to 0 May 22 11:09:09.185: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 22 11:09:19.194: 
INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 22 11:09:19.194: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 22 11:09:19.194: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 22 11:09:19.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999575s May 22 11:09:20.220: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986307669s May 22 11:09:21.225: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.980452815s May 22 11:09:22.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975506166s May 22 11:09:23.236: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.969863494s May 22 11:09:24.241: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.964939932s May 22 11:09:25.246: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.959220905s May 22 11:09:26.253: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.954260192s May 22 11:09:27.258: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.94807534s May 22 11:09:28.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 942.760127ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-t6nl5 May 22 11:09:29.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6nl5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 11:09:29.481: INFO: stderr: "I0522 11:09:29.391145 836 log.go:172] (0xc0001386e0) (0xc0006794a0) Create stream\nI0522 11:09:29.391197 836 log.go:172] (0xc0001386e0) (0xc0006794a0) Stream added, broadcasting: 1\nI0522 11:09:29.393892 836 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0522 11:09:29.393921 836 log.go:172] (0xc0001386e0) (0xc000679540) Create stream\nI0522 11:09:29.393930 836 log.go:172] (0xc0001386e0) (0xc000679540) Stream added, broadcasting: 3\nI0522 11:09:29.394918 836 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0522 11:09:29.394957 836 log.go:172] (0xc0001386e0) (0xc00031a8c0) Create stream\nI0522 11:09:29.394976 836 log.go:172] (0xc0001386e0) (0xc00031a8c0) Stream added, broadcasting: 5\nI0522 11:09:29.395999 836 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0522 11:09:29.475783 836 log.go:172] (0xc0001386e0) Data frame received for 3\nI0522 11:09:29.475823 836 log.go:172] (0xc000679540) (3) Data frame handling\nI0522 11:09:29.475852 836 log.go:172] (0xc0001386e0) Data frame received for 5\nI0522 11:09:29.475881 836 log.go:172] (0xc00031a8c0) (5) Data frame handling\nI0522 11:09:29.475904 836 log.go:172] (0xc000679540) (3) Data frame sent\nI0522 11:09:29.475915 836 log.go:172] (0xc0001386e0) Data frame received for 3\nI0522 11:09:29.475923 836 log.go:172] (0xc000679540) (3) Data frame handling\nI0522 11:09:29.477601 836 log.go:172] (0xc0001386e0) Data frame received for 1\nI0522 11:09:29.477630 836 log.go:172] (0xc0006794a0) (1) Data frame handling\nI0522 11:09:29.477655 836 log.go:172] (0xc0006794a0) (1) Data frame sent\nI0522 11:09:29.477810 836 log.go:172] (0xc0001386e0) (0xc0006794a0) Stream removed, broadcasting: 1\nI0522 11:09:29.477870 836 log.go:172] (0xc0001386e0) Go away received\nI0522 11:09:29.478002 836 log.go:172] (0xc0001386e0) (0xc0006794a0) Stream removed, broadcasting: 1\nI0522 11:09:29.478018 836 log.go:172] 
(0xc0001386e0) (0xc000679540) Stream removed, broadcasting: 3\nI0522 11:09:29.478028 836 log.go:172] (0xc0001386e0) (0xc00031a8c0) Stream removed, broadcasting: 5\n" May 22 11:09:29.482: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 11:09:29.482: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 11:09:29.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6nl5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 11:09:29.696: INFO: stderr: "I0522 11:09:29.620613 859 log.go:172] (0xc00082a2c0) (0xc0006fc640) Create stream\nI0522 11:09:29.620690 859 log.go:172] (0xc00082a2c0) (0xc0006fc640) Stream added, broadcasting: 1\nI0522 11:09:29.623822 859 log.go:172] (0xc00082a2c0) Reply frame received for 1\nI0522 11:09:29.623859 859 log.go:172] (0xc00082a2c0) (0xc0006fc6e0) Create stream\nI0522 11:09:29.623872 859 log.go:172] (0xc00082a2c0) (0xc0006fc6e0) Stream added, broadcasting: 3\nI0522 11:09:29.624794 859 log.go:172] (0xc00082a2c0) Reply frame received for 3\nI0522 11:09:29.624827 859 log.go:172] (0xc00082a2c0) (0xc0006fc780) Create stream\nI0522 11:09:29.624837 859 log.go:172] (0xc00082a2c0) (0xc0006fc780) Stream added, broadcasting: 5\nI0522 11:09:29.625960 859 log.go:172] (0xc00082a2c0) Reply frame received for 5\nI0522 11:09:29.689695 859 log.go:172] (0xc00082a2c0) Data frame received for 5\nI0522 11:09:29.689722 859 log.go:172] (0xc0006fc780) (5) Data frame handling\nI0522 11:09:29.689757 859 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0522 11:09:29.689800 859 log.go:172] (0xc0006fc6e0) (3) Data frame handling\nI0522 11:09:29.689830 859 log.go:172] (0xc0006fc6e0) (3) Data frame sent\nI0522 11:09:29.689862 859 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0522 11:09:29.689876 859 log.go:172] (0xc0006fc6e0) (3) Data frame handling\nI0522 11:09:29.691615 859 log.go:172] (0xc00082a2c0) Data frame received for 1\nI0522 11:09:29.691748 859 log.go:172] (0xc0006fc640) (1) Data frame handling\nI0522 11:09:29.691892 859 log.go:172] (0xc0006fc640) (1) Data frame sent\nI0522 11:09:29.691929 859 log.go:172] (0xc00082a2c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0522 11:09:29.691954 859 log.go:172] (0xc00082a2c0) Go away received\nI0522 11:09:29.692427 859 log.go:172] (0xc00082a2c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0522 11:09:29.692456 859 log.go:172] (0xc00082a2c0) (0xc0006fc6e0) Stream removed, broadcasting: 3\nI0522 11:09:29.692472 859 log.go:172] (0xc00082a2c0) (0xc0006fc780) Stream removed, broadcasting: 5\n" May 22 11:09:29.696: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 11:09:29.696: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 11:09:29.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6nl5 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 11:09:29.902: INFO: stderr: "I0522 11:09:29.813033 882 log.go:172] (0xc0001386e0) (0xc000752640) Create stream\nI0522 11:09:29.813077 882 log.go:172] (0xc0001386e0) (0xc000752640) Stream added, broadcasting: 1\nI0522 11:09:29.815514 882 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0522 11:09:29.815552 882 log.go:172] (0xc0001386e0) (0xc0005f6c80) Create stream\nI0522 
11:09:29.815562 882 log.go:172] (0xc0001386e0) (0xc0005f6c80) Stream added, broadcasting: 3\nI0522 11:09:29.816406 882 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0522 11:09:29.816439 882 log.go:172] (0xc0001386e0) (0xc000688000) Create stream\nI0522 11:09:29.816450 882 log.go:172] (0xc0001386e0) (0xc000688000) Stream added, broadcasting: 5\nI0522 11:09:29.817517 882 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0522 11:09:29.898049 882 log.go:172] (0xc0001386e0) Data frame received for 3\nI0522 11:09:29.898082 882 log.go:172] (0xc0005f6c80) (3) Data frame handling\nI0522 11:09:29.898100 882 log.go:172] (0xc0005f6c80) (3) Data frame sent\nI0522 11:09:29.898277 882 log.go:172] (0xc0001386e0) Data frame received for 3\nI0522 11:09:29.898306 882 log.go:172] (0xc0005f6c80) (3) Data frame handling\nI0522 11:09:29.898333 882 log.go:172] (0xc0001386e0) Data frame received for 5\nI0522 11:09:29.898345 882 log.go:172] (0xc000688000) (5) Data frame handling\nI0522 11:09:29.899685 882 log.go:172] (0xc0001386e0) Data frame received for 1\nI0522 11:09:29.899700 882 log.go:172] (0xc000752640) (1) Data frame handling\nI0522 11:09:29.899716 882 log.go:172] (0xc000752640) (1) Data frame sent\nI0522 11:09:29.899730 882 log.go:172] (0xc0001386e0) (0xc000752640) Stream removed, broadcasting: 1\nI0522 11:09:29.899785 882 log.go:172] (0xc0001386e0) Go away received\nI0522 11:09:29.899974 882 log.go:172] (0xc0001386e0) (0xc000752640) Stream removed, broadcasting: 1\nI0522 11:09:29.899990 882 log.go:172] (0xc0001386e0) (0xc0005f6c80) Stream removed, broadcasting: 3\nI0522 11:09:29.899996 882 log.go:172] (0xc0001386e0) (0xc000688000) Stream removed, broadcasting: 5\n" May 22 11:09:29.903: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 11:09:29.903: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 11:09:29.903: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 22 11:10:09.956: INFO: Deleting all statefulset in ns e2e-tests-statefulset-t6nl5 May 22 11:10:09.959: INFO: Scaling statefulset ss to 0 May 22 11:10:09.968: INFO: Waiting for statefulset status.replicas updated to 0 May 22 11:10:09.970: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:10:10.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-t6nl5" for this suite. 
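The stateful set "ss" exercised above is generated by the e2e framework; a rough equivalent, reconstructed from the details in the log (selector baz=blah,foo=bar, headless service "test", an nginx container serving /usr/share/nginx/html/index.html), would be something like the sketch below. The image and probe are assumptions, not the framework's exact test image:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test            # the headless service created by "Creating service test in namespace ..."
  replicas: 3
  selector:
    matchLabels:
      baz: blah
      foo: bar                 # matches the watcher selector initialized above
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
      - name: nginx
        image: nginx           # assumption; the framework uses its own nginx-based test image
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80

This also explains the repeated "mv -v /usr/share/nginx/html/index.html /tmp/" execs: moving index.html out of the web root makes the readiness probe fail, driving a pod to Running - Ready=false so the test can verify that scale-up and scale-down halt while any pod is unhealthy, and that pods are created in order (ss-0, ss-1, ss-2) and removed in reverse order.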
May 22 11:10:16.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:10:16.413: INFO: namespace: e2e-tests-statefulset-t6nl5, resource: bindings, ignored listing per whitelist May 22 11:10:16.413: INFO: namespace e2e-tests-statefulset-t6nl5 deletion completed in 6.242015674s • [SLOW TEST:118.833 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:10:16.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 22 11:10:16.509: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 22 11:10:16.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:16.812: INFO: stderr: "" May 22 11:10:16.812: INFO: stdout: "service/redis-slave created\n" May 22 11:10:16.812: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 22 11:10:16.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:17.193: INFO: stderr: "" May 22 11:10:17.193: INFO: stdout: "service/redis-master created\n" May 22 11:10:17.193: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 22 11:10:17.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:17.497: INFO: stderr: "" May 22 11:10:17.497: INFO: stdout: "service/frontend created\n" May 22 11:10:17.497: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 22 11:10:17.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:17.750: INFO: stderr: "" May 22 11:10:17.750: INFO: stdout: "deployment.extensions/frontend created\n" May 22 11:10:17.750: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 22 11:10:17.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:18.102: INFO: stderr: "" May 22 11:10:18.102: INFO: stdout: "deployment.extensions/redis-master created\n" May 22 11:10:18.102: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 22 11:10:18.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:18.367: INFO: stderr: "" May 22 11:10:18.367: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 22 11:10:18.367: INFO: Waiting for all frontend pods to be Running. May 22 11:10:28.418: INFO: Waiting for frontend to serve content. May 22 11:10:29.452: INFO: Trying to add a new entry to the guestbook. May 22 11:10:30.223: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 22 11:10:30.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:31.200: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 22 11:10:31.200: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 22 11:10:31.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:32.597: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 11:10:32.597: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 22 11:10:32.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:32.873: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 11:10:32.873: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 22 11:10:32.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:32.981: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 11:10:32.981: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 22 11:10:32.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:33.175: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 11:10:33.175: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 22 11:10:33.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mdkmt' May 22 11:10:34.016: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 11:10:34.016: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:10:34.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mdkmt" for this suite. 
May 22 11:11:16.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:11:16.107: INFO: namespace: e2e-tests-kubectl-mdkmt, resource: bindings, ignored listing per whitelist May 22 11:11:16.134: INFO: namespace e2e-tests-kubectl-mdkmt deletion completed in 42.087351576s • [SLOW TEST:59.720 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:11:16.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 1 pods STEP: Gathering metrics W0522 11:11:19.095614 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 22 11:11:19.095: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:11:19.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-cr58n" for this suite. 
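The deployment in this garbage-collector spec is framework-generated; a minimal stand-in with the same cascading-delete behaviour (name and image are illustrative) might be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-test-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gc-test
  template:
    metadata:
      labels:
        app: gc-test
    spec:
      containers:
      - name: nginx
        image: nginx           # assumption; any pause/nginx-style image works for this check

Because the ReplicaSet and its pods carry ownerReferences pointing back at the Deployment, deleting the Deployment with the default (background) propagation policy lets the garbage collector remove them afterwards, which is exactly what the "expected 0 rs, got 1 rs" / "expected 0 pods, got N pods" polling above is waiting for.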
May 22 11:11:25.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:11:25.398: INFO: namespace: e2e-tests-gc-cr58n, resource: bindings, ignored listing per whitelist May 22 11:11:25.400: INFO: namespace e2e-tests-gc-cr58n deletion completed in 6.300408382s • [SLOW TEST:9.266 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:11:25.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-f9fea3f1-9c1c-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:11:25.557: INFO: Waiting up to 5m0s for pod "pod-configmaps-f9ff50fd-9c1c-11ea-8e9c-0242ac110018" in namespace "e2e-tests-configmap-j7cc7" to be "success or failure" May 22 11:11:25.576: INFO: Pod "pod-configmaps-f9ff50fd-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.856449ms May 22 11:11:27.739: INFO: Pod "pod-configmaps-f9ff50fd-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181612941s May 22 11:11:29.743: INFO: Pod "pod-configmaps-f9ff50fd-9c1c-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18546757s May 22 11:11:31.747: INFO: Pod "pod-configmaps-f9ff50fd-9c1c-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.189326966s STEP: Saw pod success May 22 11:11:31.747: INFO: Pod "pod-configmaps-f9ff50fd-9c1c-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:11:31.749: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-f9ff50fd-9c1c-11ea-8e9c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 22 11:11:31.897: INFO: Waiting for pod pod-configmaps-f9ff50fd-9c1c-11ea-8e9c-0242ac110018 to disappear May 22 11:11:31.899: INFO: Pod pod-configmaps-f9ff50fd-9c1c-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:11:31.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-j7cc7" for this suite. 
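A sketch of the ConfigMap/pod pair this "mappings and Item mode" spec builds, with placeholder names, key, path and image (the real test uses its own mounttest image and randomized names):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-map
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox             # assumption
    command: ["/bin/sh", "-c", "stat -c %a /etc/configmap-volume/path/to/data-1 && cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1   # key remapped to a nested path
        mode: 0400             # per-item file mode (octal)

The pod succeeds once the container has verified both the mapped path and the per-item mode, which is the "success or failure" condition polled above.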
May 22 11:11:38.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:11:38.064: INFO: namespace: e2e-tests-configmap-j7cc7, resource: bindings, ignored listing per whitelist May 22 11:11:38.089: INFO: namespace e2e-tests-configmap-j7cc7 deletion completed in 6.186695554s • [SLOW TEST:12.689 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:11:38.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-019a6e77-9c1d-11ea-8e9c-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-019a6e77-9c1d-11ea-8e9c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:12:52.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vj6w6" for this suite. 
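For the "updates should be reflected in volume" spec, the shape of the objects involved is roughly the following (image, names and polling command are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-upd
spec:
  containers:
  - name: configmap-volume-test
    image: busybox             # assumption
    command: ["/bin/sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd

After the ConfigMap's data is updated, the kubelet eventually rewrites the projected file in the running pod; that propagation delay is the "waiting to observe update in volume" step (environment variables, by contrast, are not refreshed in place).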
May 22 11:13:16.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:13:19.317: INFO: namespace: e2e-tests-configmap-vj6w6, resource: bindings, ignored listing per whitelist May 22 11:13:19.467: INFO: namespace e2e-tests-configmap-vj6w6 deletion completed in 26.48810134s • [SLOW TEST:101.378 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:13:19.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-3e6e31ed-9c1d-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:13:20.745: INFO: Waiting up to 5m0s for pod "pod-secrets-3e78cbf0-9c1d-11ea-8e9c-0242ac110018" in namespace "e2e-tests-secrets-tmmwz" to be "success or failure" May 22 11:13:20.820: INFO: Pod "pod-secrets-3e78cbf0-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 75.105923ms May 22 11:13:22.824: INFO: Pod "pod-secrets-3e78cbf0-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079460453s May 22 11:13:24.828: INFO: Pod "pod-secrets-3e78cbf0-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083486769s May 22 11:13:26.833: INFO: Pod "pod-secrets-3e78cbf0-9c1d-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088128044s STEP: Saw pod success May 22 11:13:26.833: INFO: Pod "pod-secrets-3e78cbf0-9c1d-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:13:26.836: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-3e78cbf0-9c1d-11ea-8e9c-0242ac110018 container secret-volume-test: STEP: delete the pod May 22 11:13:26.898: INFO: Waiting for pod pod-secrets-3e78cbf0-9c1d-11ea-8e9c-0242ac110018 to disappear May 22 11:13:26.916: INFO: Pod pod-secrets-3e78cbf0-9c1d-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:13:26.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tmmwz" for this suite. 
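The Secret volume with defaultMode exercised here follows the same pattern; an illustrative equivalent (names, value and image are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==         # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox             # assumption
    command: ["/bin/sh", "-c", "stat -c %a /etc/secret-volume/data-1 && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400        # applied to every projected key unless an item overrides it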
May 22 11:13:33.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:13:33.132: INFO: namespace: e2e-tests-secrets-tmmwz, resource: bindings, ignored listing per whitelist May 22 11:13:33.136: INFO: namespace e2e-tests-secrets-tmmwz deletion completed in 6.215475131s • [SLOW TEST:13.668 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:13:33.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-4619c419-9c1d-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:13:33.243: INFO: Waiting up to 5m0s for pod "pod-secrets-461a62ce-9c1d-11ea-8e9c-0242ac110018" in namespace "e2e-tests-secrets-jxhpc" to be "success or failure" May 22 11:13:33.254: INFO: Pod "pod-secrets-461a62ce-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.567128ms May 22 11:13:35.450: INFO: Pod "pod-secrets-461a62ce-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207062464s May 22 11:13:37.453: INFO: Pod "pod-secrets-461a62ce-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210083867s May 22 11:13:39.457: INFO: Pod "pod-secrets-461a62ce-9c1d-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.214018781s STEP: Saw pod success May 22 11:13:39.457: INFO: Pod "pod-secrets-461a62ce-9c1d-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:13:39.460: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-461a62ce-9c1d-11ea-8e9c-0242ac110018 container secret-volume-test: STEP: delete the pod May 22 11:13:39.497: INFO: Waiting for pod pod-secrets-461a62ce-9c1d-11ea-8e9c-0242ac110018 to disappear May 22 11:13:39.502: INFO: Pod pod-secrets-461a62ce-9c1d-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:13:39.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jxhpc" for this suite. 
May 22 11:13:45.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:13:45.660: INFO: namespace: e2e-tests-secrets-jxhpc, resource: bindings, ignored listing per whitelist May 22 11:13:45.831: INFO: namespace e2e-tests-secrets-jxhpc deletion completed in 6.325267607s • [SLOW TEST:12.694 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:13:45.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-4dd86883-9c1d-11ea-8e9c-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-4dd868e2-9c1d-11ea-8e9c-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4dd86883-9c1d-11ea-8e9c-0242ac110018 STEP: Updating configmap cm-test-opt-upd-4dd868e2-9c1d-11ea-8e9c-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-4dd86905-9c1d-11ea-8e9c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:13:58.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8hjl2" for this suite. 
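[editor note] The projected configMap "optional updates" test above creates, deletes, and updates ConfigMaps while a pod watches the projected volume. A sketch of the kind of pod spec involved, assuming illustrative names and paths; marking the sources optional is what lets the pod keep running while a referenced ConfigMap is absent:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/projected/upd/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del       # deleted during the test
          optional: true
          items:
          - key: data-1
            path: del/data-1
      - configMap:
          name: cm-test-opt-upd       # updated during the test; the change shows up in the volume
          optional: true
          items:
          - key: data-1
            path: upd/data-1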
May 22 11:14:22.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:14:22.951: INFO: namespace: e2e-tests-projected-8hjl2, resource: bindings, ignored listing per whitelist May 22 11:14:22.977: INFO: namespace e2e-tests-projected-8hjl2 deletion completed in 24.281909093s • [SLOW TEST:37.146 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:14:22.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 22 11:14:23.278: INFO: Waiting up to 5m0s for pod "pod-63eda5f1-9c1d-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-dmkmg" to be "success or failure" May 22 11:14:23.288: INFO: Pod "pod-63eda5f1-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.540114ms May 22 11:14:25.292: INFO: Pod "pod-63eda5f1-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014125663s May 22 11:14:27.296: INFO: Pod "pod-63eda5f1-9c1d-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017673689s STEP: Saw pod success May 22 11:14:27.296: INFO: Pod "pod-63eda5f1-9c1d-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:14:27.299: INFO: Trying to get logs from node hunter-worker2 pod pod-63eda5f1-9c1d-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:14:27.470: INFO: Waiting for pod pod-63eda5f1-9c1d-11ea-8e9c-0242ac110018 to disappear May 22 11:14:27.624: INFO: Pod pod-63eda5f1-9c1d-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:14:27.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dmkmg" for this suite. 
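[editor note] The EmptyDir (root,0666,default) test above writes into an emptyDir volume on the default medium and checks the resulting mode. A minimal sketch with illustrative names and image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-default
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hello > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium: backed by node storage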
May 22 11:14:33.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:14:33.781: INFO: namespace: e2e-tests-emptydir-dmkmg, resource: bindings, ignored listing per whitelist May 22 11:14:33.817: INFO: namespace e2e-tests-emptydir-dmkmg deletion completed in 6.18924218s • [SLOW TEST:10.840 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:14:33.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 11:14:33.930: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 5.095763ms) May 22 11:14:33.934: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.418161ms) May 22 11:14:33.937: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.137968ms) May 22 11:14:33.940: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.160027ms) May 22 11:14:33.943: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.886993ms) May 22 11:14:33.945: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.442314ms) May 22 11:14:33.948: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.803923ms) May 22 11:14:33.996: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 47.693418ms) May 22 11:14:34.008: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 11.981486ms) May 22 11:14:34.011: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.853416ms) May 22 11:14:34.013: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.596303ms) May 22 11:14:34.016: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.377503ms) May 22 11:14:34.019: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.007683ms) May 22 11:14:34.021: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.299888ms) May 22 11:14:34.023: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.122338ms) May 22 11:14:34.026: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.424479ms) May 22 11:14:34.028: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.405426ms) May 22 11:14:34.031: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.585332ms) May 22 11:14:34.033: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.260219ms) May 22 11:14:34.035: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.128588ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:14:34.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-k2cxc" for this suite. May 22 11:14:40.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:14:40.145: INFO: namespace: e2e-tests-proxy-k2cxc, resource: bindings, ignored listing per whitelist May 22 11:14:40.160: INFO: namespace e2e-tests-proxy-k2cxc deletion completed in 6.12239661s • [SLOW TEST:6.342 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:14:40.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 22 11:14:40.411: INFO: Waiting up to 5m0s for pod "pod-6e1b94d9-9c1d-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-644qk" to be "success or failure" May 22 11:14:40.468: INFO: Pod "pod-6e1b94d9-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 56.220156ms May 22 11:14:42.769: INFO: Pod "pod-6e1b94d9-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358136909s May 22 11:14:44.774: INFO: Pod "pod-6e1b94d9-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362512636s May 22 11:14:46.778: INFO: Pod "pod-6e1b94d9-9c1d-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.366545236s STEP: Saw pod success May 22 11:14:46.778: INFO: Pod "pod-6e1b94d9-9c1d-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:14:46.780: INFO: Trying to get logs from node hunter-worker pod pod-6e1b94d9-9c1d-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:14:46.809: INFO: Waiting for pod pod-6e1b94d9-9c1d-11ea-8e9c-0242ac110018 to disappear May 22 11:14:46.846: INFO: Pod pod-6e1b94d9-9c1d-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:14:46.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-644qk" for this suite. 
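[editor note] The EmptyDir (root,0644,tmpfs) test above is the same pattern as the default-medium case, except the volume is backed by memory. A hedged sketch, names and image illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /test-volume && echo hello > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # backs the volume with tmpfs instead of node disk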
May 22 11:14:52.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:14:52.928: INFO: namespace: e2e-tests-emptydir-644qk, resource: bindings, ignored listing per whitelist May 22 11:14:52.951: INFO: namespace e2e-tests-emptydir-644qk deletion completed in 6.101868275s • [SLOW TEST:12.791 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:14:52.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-75b06222-9c1d-11ea-8e9c-0242ac110018 STEP: Creating secret with name s-test-opt-upd-75b0628f-9c1d-11ea-8e9c-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-75b06222-9c1d-11ea-8e9c-0242ac110018 STEP: Updating secret s-test-opt-upd-75b0628f-9c1d-11ea-8e9c-0242ac110018 STEP: Creating secret with name s-test-opt-create-75b062bc-9c1d-11ea-8e9c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:15:03.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-v5c4d" for this suite. 
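[editor note] The Secrets "optional updates" test above deletes one referenced Secret, updates another, and creates a third while the pod is running. A sketch of a pod that tolerates a missing Secret by marking the volume source optional (Secret and container names are abbreviated/illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional
spec:
  containers:
  - name: secret-volume-watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do ls -R /etc/secret-volumes 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: del-volume
      mountPath: /etc/secret-volumes/delete
    - name: create-volume
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: del-volume
    secret:
      secretName: s-test-opt-del      # deleted during the test
      optional: true                  # pod keeps running even when the Secret is gone
  - name: create-volume
    secret:
      secretName: s-test-opt-create   # created only after the pod starts
      optional: true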
May 22 11:15:27.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:15:27.325: INFO: namespace: e2e-tests-secrets-v5c4d, resource: bindings, ignored listing per whitelist May 22 11:15:27.392: INFO: namespace e2e-tests-secrets-v5c4d deletion completed in 24.1614042s • [SLOW TEST:34.441 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:15:27.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:15:27.483: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a31bcf1-9c1d-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-d6xv9" to be "success or failure" May 22 11:15:27.499: INFO: Pod "downwardapi-volume-8a31bcf1-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.965158ms May 22 11:15:29.504: INFO: Pod "downwardapi-volume-8a31bcf1-9c1d-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020770497s May 22 11:15:31.509: INFO: Pod "downwardapi-volume-8a31bcf1-9c1d-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025784834s STEP: Saw pod success May 22 11:15:31.509: INFO: Pod "downwardapi-volume-8a31bcf1-9c1d-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:15:31.511: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8a31bcf1-9c1d-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:15:31.531: INFO: Waiting for pod downwardapi-volume-8a31bcf1-9c1d-11ea-8e9c-0242ac110018 to disappear May 22 11:15:31.535: INFO: Pod downwardapi-volume-8a31bcf1-9c1d-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:15:31.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-d6xv9" for this suite. 
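[editor note] The Downward API "memory request" test above exposes the container's own memory request as a file in a downwardAPI volume. A minimal sketch; the request value and divisor are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi             # this request is what the volume exposes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi           # with a 32Mi request the file contains "32"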
May 22 11:15:37.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:15:37.578: INFO: namespace: e2e-tests-downward-api-d6xv9, resource: bindings, ignored listing per whitelist May 22 11:15:37.611: INFO: namespace e2e-tests-downward-api-d6xv9 deletion completed in 6.072993738s • [SLOW TEST:10.219 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:15:37.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-p86v2 May 22 11:15:41.753: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-p86v2 STEP: checking the pod's current state and verifying that restartCount is present May 22 11:15:41.756: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:19:43.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-p86v2" for this suite. 
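[editor note] The probe test above (pod liveness-exec) verifies that a container whose exec liveness probe keeps succeeding is never restarted; hence the roughly four-minute observation window and restartCount staying at 0. A sketch of such a pod, with illustrative image, timings, and sleep duration:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]   # /tmp/health stays in place for the pod's lifetime
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 5
      periodSeconds: 5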
May 22 11:19:49.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:19:49.107: INFO: namespace: e2e-tests-container-probe-p86v2, resource: bindings, ignored listing per whitelist May 22 11:19:49.162: INFO: namespace e2e-tests-container-probe-p86v2 deletion completed in 6.089096534s • [SLOW TEST:251.551 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:19:49.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-flqp4 STEP: creating a selector STEP: Creating the service pods in kubernetes May 22 11:19:49.259: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 22 11:20:15.437: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.214 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-flqp4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 11:20:15.437: INFO: >>> kubeConfig: /root/.kube/config I0522 11:20:15.470671 6 log.go:172] (0xc000a48a50) (0xc001a817c0) Create stream I0522 11:20:15.470712 6 log.go:172] (0xc000a48a50) (0xc001a817c0) Stream added, broadcasting: 1 I0522 11:20:15.473392 6 log.go:172] (0xc000a48a50) Reply frame received for 1 I0522 11:20:15.473433 6 log.go:172] (0xc000a48a50) (0xc001834d20) Create stream I0522 11:20:15.473450 6 log.go:172] (0xc000a48a50) (0xc001834d20) Stream added, broadcasting: 3 I0522 11:20:15.474446 6 log.go:172] (0xc000a48a50) Reply frame received for 3 I0522 11:20:15.474486 6 log.go:172] (0xc000a48a50) (0xc000322aa0) Create stream I0522 11:20:15.474504 6 log.go:172] (0xc000a48a50) (0xc000322aa0) Stream added, broadcasting: 5 I0522 11:20:15.475572 6 log.go:172] (0xc000a48a50) Reply frame received for 5 I0522 11:20:16.561916 6 log.go:172] (0xc000a48a50) Data frame received for 3 I0522 11:20:16.561962 6 log.go:172] (0xc001834d20) (3) Data frame handling I0522 11:20:16.561995 6 log.go:172] (0xc001834d20) (3) Data frame sent I0522 11:20:16.563229 6 log.go:172] (0xc000a48a50) Data frame received for 3 I0522 11:20:16.563276 6 log.go:172] (0xc001834d20) (3) Data frame handling I0522 11:20:16.563312 6 log.go:172] (0xc000a48a50) Data frame received for 5 I0522 11:20:16.563337 6 log.go:172] (0xc000322aa0) (5) Data frame handling I0522 11:20:16.564641 6 log.go:172] (0xc000a48a50) Data frame received for 1 I0522 
11:20:16.564679 6 log.go:172] (0xc001a817c0) (1) Data frame handling I0522 11:20:16.564697 6 log.go:172] (0xc001a817c0) (1) Data frame sent I0522 11:20:16.564712 6 log.go:172] (0xc000a48a50) (0xc001a817c0) Stream removed, broadcasting: 1 I0522 11:20:16.564810 6 log.go:172] (0xc000a48a50) (0xc001a817c0) Stream removed, broadcasting: 1 I0522 11:20:16.564849 6 log.go:172] (0xc000a48a50) Go away received I0522 11:20:16.564934 6 log.go:172] (0xc000a48a50) (0xc001834d20) Stream removed, broadcasting: 3 I0522 11:20:16.564991 6 log.go:172] (0xc000a48a50) (0xc000322aa0) Stream removed, broadcasting: 5 May 22 11:20:16.565: INFO: Found all expected endpoints: [netserver-0] May 22 11:20:16.569: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.185 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-flqp4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 11:20:16.569: INFO: >>> kubeConfig: /root/.kube/config I0522 11:20:16.603475 6 log.go:172] (0xc001af0580) (0xc001834fa0) Create stream I0522 11:20:16.603543 6 log.go:172] (0xc001af0580) (0xc001834fa0) Stream added, broadcasting: 1 I0522 11:20:16.611604 6 log.go:172] (0xc001af0580) Reply frame received for 1 I0522 11:20:16.611687 6 log.go:172] (0xc001af0580) (0xc001835040) Create stream I0522 11:20:16.611715 6 log.go:172] (0xc001af0580) (0xc001835040) Stream added, broadcasting: 3 I0522 11:20:16.613354 6 log.go:172] (0xc001af0580) Reply frame received for 3 I0522 11:20:16.613384 6 log.go:172] (0xc001af0580) (0xc000322d20) Create stream I0522 11:20:16.613408 6 log.go:172] (0xc001af0580) (0xc000322d20) Stream added, broadcasting: 5 I0522 11:20:16.615293 6 log.go:172] (0xc001af0580) Reply frame received for 5 I0522 11:20:17.702335 6 log.go:172] (0xc001af0580) Data frame received for 3 I0522 11:20:17.702422 6 log.go:172] (0xc001835040) (3) Data frame handling I0522 11:20:17.702463 6 log.go:172] (0xc001835040) (3) Data frame sent I0522 11:20:17.702534 6 log.go:172] (0xc001af0580) Data frame received for 3 I0522 11:20:17.702563 6 log.go:172] (0xc001835040) (3) Data frame handling I0522 11:20:17.703640 6 log.go:172] (0xc001af0580) Data frame received for 5 I0522 11:20:17.703662 6 log.go:172] (0xc000322d20) (5) Data frame handling I0522 11:20:17.704682 6 log.go:172] (0xc001af0580) Data frame received for 1 I0522 11:20:17.704726 6 log.go:172] (0xc001834fa0) (1) Data frame handling I0522 11:20:17.704761 6 log.go:172] (0xc001834fa0) (1) Data frame sent I0522 11:20:17.704788 6 log.go:172] (0xc001af0580) (0xc001834fa0) Stream removed, broadcasting: 1 I0522 11:20:17.704830 6 log.go:172] (0xc001af0580) Go away received I0522 11:20:17.704943 6 log.go:172] (0xc001af0580) (0xc001834fa0) Stream removed, broadcasting: 1 I0522 11:20:17.704969 6 log.go:172] (0xc001af0580) (0xc001835040) Stream removed, broadcasting: 3 I0522 11:20:17.704987 6 log.go:172] (0xc001af0580) (0xc000322d20) Stream removed, broadcasting: 5 May 22 11:20:17.705: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:20:17.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-flqp4" for this suite. 
May 22 11:20:41.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:20:41.779: INFO: namespace: e2e-tests-pod-network-test-flqp4, resource: bindings, ignored listing per whitelist May 22 11:20:41.795: INFO: namespace e2e-tests-pod-network-test-flqp4 deletion completed in 24.08619458s • [SLOW TEST:52.633 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:20:41.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-45ad5078-9c1e-11ea-8e9c-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-45ad50cb-9c1e-11ea-8e9c-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-45ad5078-9c1e-11ea-8e9c-0242ac110018 STEP: Updating configmap cm-test-opt-upd-45ad50cb-9c1e-11ea-8e9c-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-45ad50e2-9c1e-11ea-8e9c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:20:52.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pjzmk" for this suite. 
May 22 11:21:16.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:21:16.179: INFO: namespace: e2e-tests-configmap-pjzmk, resource: bindings, ignored listing per whitelist May 22 11:21:16.271: INFO: namespace e2e-tests-configmap-pjzmk deletion completed in 24.129503441s • [SLOW TEST:34.475 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:21:16.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 22 11:21:16.478: INFO: Waiting up to 5m0s for pod "pod-5a2e4f36-9c1e-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-j5gxm" to be "success or failure" May 22 11:21:16.489: INFO: Pod "pod-5a2e4f36-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.496208ms May 22 11:21:18.514: INFO: Pod "pod-5a2e4f36-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036292292s May 22 11:21:20.517: INFO: Pod "pod-5a2e4f36-9c1e-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039505368s STEP: Saw pod success May 22 11:21:20.517: INFO: Pod "pod-5a2e4f36-9c1e-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:21:20.519: INFO: Trying to get logs from node hunter-worker pod pod-5a2e4f36-9c1e-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:21:20.537: INFO: Waiting for pod pod-5a2e4f36-9c1e-11ea-8e9c-0242ac110018 to disappear May 22 11:21:20.542: INFO: Pod pod-5a2e4f36-9c1e-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:21:20.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-j5gxm" for this suite. 
May 22 11:21:26.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:21:26.669: INFO: namespace: e2e-tests-emptydir-j5gxm, resource: bindings, ignored listing per whitelist May 22 11:21:26.674: INFO: namespace e2e-tests-emptydir-j5gxm deletion completed in 6.109126153s • [SLOW TEST:10.403 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:21:26.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-6059072f-9c1e-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:21:26.808: INFO: Waiting up to 5m0s for pod "pod-secrets-605f6bd8-9c1e-11ea-8e9c-0242ac110018" in namespace "e2e-tests-secrets-tf528" to be "success or failure" May 22 11:21:26.812: INFO: Pod "pod-secrets-605f6bd8-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241209ms May 22 11:21:28.830: INFO: Pod "pod-secrets-605f6bd8-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021174303s May 22 11:21:30.834: INFO: Pod "pod-secrets-605f6bd8-9c1e-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.025460467s May 22 11:21:32.838: INFO: Pod "pod-secrets-605f6bd8-9c1e-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029987445s STEP: Saw pod success May 22 11:21:32.838: INFO: Pod "pod-secrets-605f6bd8-9c1e-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:21:32.842: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-605f6bd8-9c1e-11ea-8e9c-0242ac110018 container secret-volume-test: STEP: delete the pod May 22 11:21:32.878: INFO: Waiting for pod pod-secrets-605f6bd8-9c1e-11ea-8e9c-0242ac110018 to disappear May 22 11:21:32.889: INFO: Pod pod-secrets-605f6bd8-9c1e-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:21:32.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tf528" for this suite. 
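[editor note] The "non-root with defaultMode and fsGroup" Secrets test above combines a pod-level security context with a secret volume mode so the non-root user can still read the projected files. A hedged sketch; the UID, GID, and mode are illustrative, not the values the suite used:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # illustrative non-root UID
    fsGroup: 2000                # group ownership applied to the volume files
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "id && ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0440          # illustrative; readable by owner and the fsGroup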
May 22 11:21:38.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:21:38.923: INFO: namespace: e2e-tests-secrets-tf528, resource: bindings, ignored listing per whitelist May 22 11:21:38.983: INFO: namespace e2e-tests-secrets-tf528 deletion completed in 6.090121426s • [SLOW TEST:12.309 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:21:38.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 22 11:21:39.066: INFO: Waiting up to 5m0s for pod "client-containers-67acabf9-9c1e-11ea-8e9c-0242ac110018" in namespace "e2e-tests-containers-hhvp9" to be "success or failure" May 22 11:21:39.083: INFO: Pod "client-containers-67acabf9-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.673213ms May 22 11:21:41.087: INFO: Pod "client-containers-67acabf9-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020870521s May 22 11:21:43.092: INFO: Pod "client-containers-67acabf9-9c1e-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025575455s STEP: Saw pod success May 22 11:21:43.092: INFO: Pod "client-containers-67acabf9-9c1e-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:21:43.095: INFO: Trying to get logs from node hunter-worker2 pod client-containers-67acabf9-9c1e-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:21:43.135: INFO: Waiting for pod client-containers-67acabf9-9c1e-11ea-8e9c-0242ac110018 to disappear May 22 11:21:43.194: INFO: Pod client-containers-67acabf9-9c1e-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:21:43.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-hhvp9" for this suite. 
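[editor note] The Docker Containers test above ("override the image's default command and arguments") relies on the pod spec's command/args fields taking precedence over the image's ENTRYPOINT/CMD. A minimal sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["echo"]                    # replaces the image ENTRYPOINT
    args: ["override", "arguments"]      # replaces the image CMD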
May 22 11:21:51.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:21:51.235: INFO: namespace: e2e-tests-containers-hhvp9, resource: bindings, ignored listing per whitelist May 22 11:21:51.284: INFO: namespace e2e-tests-containers-hhvp9 deletion completed in 8.085671288s • [SLOW TEST:12.300 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:21:51.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 22 11:21:51.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-r2dd5 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 22 11:22:03.156: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0522 11:22:03.078669 1188 log.go:172] (0xc000138160) (0xc0005f6140) Create stream\nI0522 11:22:03.078707 1188 log.go:172] (0xc000138160) (0xc0005f6140) Stream added, broadcasting: 1\nI0522 11:22:03.080804 1188 log.go:172] (0xc000138160) Reply frame received for 1\nI0522 11:22:03.080848 1188 log.go:172] (0xc000138160) (0xc000752460) Create stream\nI0522 11:22:03.080857 1188 log.go:172] (0xc000138160) (0xc000752460) Stream added, broadcasting: 3\nI0522 11:22:03.082089 1188 log.go:172] (0xc000138160) Reply frame received for 3\nI0522 11:22:03.082151 1188 log.go:172] (0xc000138160) (0xc0005f61e0) Create stream\nI0522 11:22:03.082169 1188 log.go:172] (0xc000138160) (0xc0005f61e0) Stream added, broadcasting: 5\nI0522 11:22:03.083181 1188 log.go:172] (0xc000138160) Reply frame received for 5\nI0522 11:22:03.083223 1188 log.go:172] (0xc000138160) (0xc0005f6280) Create stream\nI0522 11:22:03.083240 1188 log.go:172] (0xc000138160) (0xc0005f6280) Stream added, broadcasting: 7\nI0522 11:22:03.084208 1188 log.go:172] (0xc000138160) Reply frame received for 7\nI0522 11:22:03.084334 1188 log.go:172] (0xc000752460) (3) Writing data frame\nI0522 11:22:03.084403 1188 log.go:172] (0xc000752460) (3) Writing data frame\nI0522 11:22:03.085625 1188 log.go:172] (0xc000138160) Data frame received for 5\nI0522 11:22:03.085646 1188 log.go:172] (0xc0005f61e0) (5) Data frame handling\nI0522 11:22:03.085666 1188 log.go:172] (0xc0005f61e0) (5) Data frame sent\nI0522 11:22:03.086192 1188 log.go:172] (0xc000138160) Data frame received for 5\nI0522 11:22:03.086203 1188 log.go:172] (0xc0005f61e0) (5) Data frame handling\nI0522 11:22:03.086210 1188 log.go:172] (0xc0005f61e0) (5) Data frame sent\nI0522 11:22:03.127570 1188 log.go:172] (0xc000138160) Data frame received for 5\nI0522 11:22:03.127609 1188 log.go:172] (0xc0005f61e0) (5) Data frame handling\nI0522 11:22:03.127641 1188 log.go:172] (0xc000138160) Data frame received for 7\nI0522 11:22:03.127659 1188 log.go:172] (0xc0005f6280) (7) Data frame handling\nI0522 11:22:03.128292 1188 log.go:172] (0xc000138160) (0xc000752460) Stream removed, broadcasting: 3\nI0522 11:22:03.128345 1188 log.go:172] (0xc000138160) Data frame received for 1\nI0522 11:22:03.128366 1188 log.go:172] (0xc0005f6140) (1) Data frame handling\nI0522 11:22:03.128382 1188 log.go:172] (0xc0005f6140) (1) Data frame sent\nI0522 11:22:03.128403 1188 log.go:172] (0xc000138160) (0xc0005f6140) Stream removed, broadcasting: 1\nI0522 11:22:03.128529 1188 log.go:172] (0xc000138160) (0xc0005f6140) Stream removed, broadcasting: 1\nI0522 11:22:03.128558 1188 log.go:172] (0xc000138160) (0xc000752460) Stream removed, broadcasting: 3\nI0522 11:22:03.128578 1188 log.go:172] (0xc000138160) (0xc0005f61e0) Stream removed, broadcasting: 5\nI0522 11:22:03.129013 1188 log.go:172] (0xc000138160) (0xc0005f6280) Stream removed, broadcasting: 7\nI0522 11:22:03.130178 1188 log.go:172] (0xc000138160) Go away received\n" May 22 11:22:03.156: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:22:05.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-r2dd5" for this suite. 
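[editor note] The test above drives everything through kubectl run --rm --generator=job/v1 with an attached stdin; the "abcd1234" echoed back in stdout is the data written on stdin before EOF. Expressed as a standalone manifest rather than the deprecated generator, roughly the same workload would look like the sketch below (not the object kubectl actually generated):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true              # the test attaches and writes "abcd1234" on stdin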
May 22 11:22:13.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:22:13.251: INFO: namespace: e2e-tests-kubectl-r2dd5, resource: bindings, ignored listing per whitelist May 22 11:22:13.324: INFO: namespace e2e-tests-kubectl-r2dd5 deletion completed in 8.159094578s • [SLOW TEST:22.040 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:22:13.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 22 11:22:13.447: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:22:20.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-vfnx2" for this suite. 
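[editor note] The InitContainer test above checks that when an init container fails on a pod with restartPolicy Never, the app containers are never started and the pod fails. A sketch of that shape, with illustrative names, image, and commands:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never                   # a failed init container fails the whole pod
  initContainers:
  - name: init-fail
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 1"]      # fails, so the app container below never starts
  containers:
  - name: run-never
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo app container should not run"]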
May 22 11:22:26.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:22:26.197: INFO: namespace: e2e-tests-init-container-vfnx2, resource: bindings, ignored listing per whitelist May 22 11:22:26.206: INFO: namespace e2e-tests-init-container-vfnx2 deletion completed in 6.095250239s • [SLOW TEST:12.881 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:22:26.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-83ded367-9c1e-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:22:26.377: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-83df8294-9c1e-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-f8wsg" to be "success or failure" May 22 11:22:26.415: INFO: Pod "pod-projected-secrets-83df8294-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.725306ms May 22 11:22:28.552: INFO: Pod "pod-projected-secrets-83df8294-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174858637s May 22 11:22:30.557: INFO: Pod "pod-projected-secrets-83df8294-9c1e-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.17912708s May 22 11:22:32.561: INFO: Pod "pod-projected-secrets-83df8294-9c1e-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183427738s STEP: Saw pod success May 22 11:22:32.561: INFO: Pod "pod-projected-secrets-83df8294-9c1e-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:22:32.564: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-83df8294-9c1e-11ea-8e9c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 22 11:22:32.586: INFO: Waiting for pod pod-projected-secrets-83df8294-9c1e-11ea-8e9c-0242ac110018 to disappear May 22 11:22:32.591: INFO: Pod pod-projected-secrets-83df8294-9c1e-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:22:32.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f8wsg" for this suite. 
May 22 11:22:38.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:22:38.671: INFO: namespace: e2e-tests-projected-f8wsg, resource: bindings, ignored listing per whitelist May 22 11:22:38.702: INFO: namespace e2e-tests-projected-f8wsg deletion completed in 6.108452082s • [SLOW TEST:12.496 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:22:38.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-8b81df96-9c1e-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:22:39.359: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8b85fdd8-9c1e-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-r4xrq" to be "success or failure" May 22 11:22:39.456: INFO: Pod "pod-projected-secrets-8b85fdd8-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 97.446431ms May 22 11:22:41.523: INFO: Pod "pod-projected-secrets-8b85fdd8-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164479254s May 22 11:22:43.865: INFO: Pod "pod-projected-secrets-8b85fdd8-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.50615896s May 22 11:22:45.869: INFO: Pod "pod-projected-secrets-8b85fdd8-9c1e-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.510050196s STEP: Saw pod success May 22 11:22:45.869: INFO: Pod "pod-projected-secrets-8b85fdd8-9c1e-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:22:45.872: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-8b85fdd8-9c1e-11ea-8e9c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 22 11:22:46.193: INFO: Waiting for pod pod-projected-secrets-8b85fdd8-9c1e-11ea-8e9c-0242ac110018 to disappear May 22 11:22:46.308: INFO: Pod pod-projected-secrets-8b85fdd8-9c1e-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:22:46.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r4xrq" for this suite. 
May 22 11:22:54.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:22:54.779: INFO: namespace: e2e-tests-projected-r4xrq, resource: bindings, ignored listing per whitelist May 22 11:22:54.794: INFO: namespace e2e-tests-projected-r4xrq deletion completed in 8.481891385s • [SLOW TEST:16.091 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:22:54.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 22 11:22:55.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lwkll' May 22 11:22:56.646: INFO: stderr: "" May 22 11:22:56.646: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 11:22:56.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lwkll' May 22 11:22:56.945: INFO: stderr: "" May 22 11:22:56.945: INFO: stdout: "update-demo-nautilus-pcwxd update-demo-nautilus-q8zwt " May 22 11:22:56.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcwxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:22:57.112: INFO: stderr: "" May 22 11:22:57.112: INFO: stdout: "" May 22 11:22:57.112: INFO: update-demo-nautilus-pcwxd is created but not running May 22 11:23:02.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:02.590: INFO: stderr: "" May 22 11:23:02.590: INFO: stdout: "update-demo-nautilus-pcwxd update-demo-nautilus-q8zwt " May 22 11:23:02.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcwxd -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:02.698: INFO: stderr: "" May 22 11:23:02.698: INFO: stdout: "true" May 22 11:23:02.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcwxd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:02.804: INFO: stderr: "" May 22 11:23:02.804: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 11:23:02.804: INFO: validating pod update-demo-nautilus-pcwxd May 22 11:23:02.871: INFO: got data: { "image": "nautilus.jpg" } May 22 11:23:02.871: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 11:23:02.871: INFO: update-demo-nautilus-pcwxd is verified up and running May 22 11:23:02.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8zwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:02.975: INFO: stderr: "" May 22 11:23:02.975: INFO: stdout: "true" May 22 11:23:02.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8zwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:03.065: INFO: stderr: "" May 22 11:23:03.065: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 11:23:03.066: INFO: validating pod update-demo-nautilus-q8zwt May 22 11:23:03.130: INFO: got data: { "image": "nautilus.jpg" } May 22 11:23:03.130: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 11:23:03.130: INFO: update-demo-nautilus-q8zwt is verified up and running STEP: scaling down the replication controller May 22 11:23:03.133: INFO: scanned /root for discovery docs: May 22 11:23:03.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:04.272: INFO: stderr: "" May 22 11:23:04.272: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 22 11:23:04.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:04.376: INFO: stderr: "" May 22 11:23:04.376: INFO: stdout: "update-demo-nautilus-pcwxd update-demo-nautilus-q8zwt " STEP: Replicas for name=update-demo: expected=1 actual=2 May 22 11:23:09.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:09.480: INFO: stderr: "" May 22 11:23:09.480: INFO: stdout: "update-demo-nautilus-pcwxd update-demo-nautilus-q8zwt " STEP: Replicas for name=update-demo: expected=1 actual=2 May 22 11:23:14.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:14.585: INFO: stderr: "" May 22 11:23:14.585: INFO: stdout: "update-demo-nautilus-q8zwt " May 22 11:23:14.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8zwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:14.691: INFO: stderr: "" May 22 11:23:14.691: INFO: stdout: "true" May 22 11:23:14.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8zwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:14.786: INFO: stderr: "" May 22 11:23:14.786: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 11:23:14.786: INFO: validating pod update-demo-nautilus-q8zwt May 22 11:23:14.790: INFO: got data: { "image": "nautilus.jpg" } May 22 11:23:14.790: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 11:23:14.790: INFO: update-demo-nautilus-q8zwt is verified up and running STEP: scaling up the replication controller May 22 11:23:14.793: INFO: scanned /root for discovery docs: May 22 11:23:14.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:15.936: INFO: stderr: "" May 22 11:23:15.937: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 11:23:15.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:16.119: INFO: stderr: "" May 22 11:23:16.119: INFO: stdout: "update-demo-nautilus-jzqfj update-demo-nautilus-q8zwt " May 22 11:23:16.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jzqfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:16.224: INFO: stderr: "" May 22 11:23:16.224: INFO: stdout: "" May 22 11:23:16.224: INFO: update-demo-nautilus-jzqfj is created but not running May 22 11:23:21.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:21.340: INFO: stderr: "" May 22 11:23:21.340: INFO: stdout: "update-demo-nautilus-jzqfj update-demo-nautilus-q8zwt " May 22 11:23:21.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jzqfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:21.439: INFO: stderr: "" May 22 11:23:21.439: INFO: stdout: "true" May 22 11:23:21.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jzqfj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:21.545: INFO: stderr: "" May 22 11:23:21.545: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 11:23:21.545: INFO: validating pod update-demo-nautilus-jzqfj May 22 11:23:21.549: INFO: got data: { "image": "nautilus.jpg" } May 22 11:23:21.549: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 11:23:21.549: INFO: update-demo-nautilus-jzqfj is verified up and running May 22 11:23:21.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8zwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:21.695: INFO: stderr: "" May 22 11:23:21.695: INFO: stdout: "true" May 22 11:23:21.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8zwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:21.798: INFO: stderr: "" May 22 11:23:21.798: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 11:23:21.798: INFO: validating pod update-demo-nautilus-q8zwt May 22 11:23:21.801: INFO: got data: { "image": "nautilus.jpg" } May 22 11:23:21.801: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 11:23:21.801: INFO: update-demo-nautilus-q8zwt is verified up and running STEP: using delete to clean up resources May 22 11:23:21.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:21.902: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 22 11:23:21.902: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 22 11:23:21.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-lwkll' May 22 11:23:22.097: INFO: stderr: "No resources found.\n" May 22 11:23:22.097: INFO: stdout: "" May 22 11:23:22.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-lwkll -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 22 11:23:22.225: INFO: stderr: "" May 22 11:23:22.225: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:23:22.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lwkll" for this suite. May 22 11:23:46.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:23:46.313: INFO: namespace: e2e-tests-kubectl-lwkll, resource: bindings, ignored listing per whitelist May 22 11:23:46.350: INFO: namespace e2e-tests-kubectl-lwkll deletion completed in 24.120632062s • [SLOW TEST:51.556 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:23:46.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:23:46.485: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b39e0408-9c1e-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-29v8c" to be "success or failure" May 22 11:23:46.513: INFO: Pod "downwardapi-volume-b39e0408-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.089509ms May 22 11:23:48.633: INFO: Pod "downwardapi-volume-b39e0408-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147691013s May 22 11:23:50.654: INFO: Pod "downwardapi-volume-b39e0408-9c1e-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.168689351s STEP: Saw pod success May 22 11:23:50.654: INFO: Pod "downwardapi-volume-b39e0408-9c1e-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:23:50.657: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b39e0408-9c1e-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:23:50.790: INFO: Waiting for pod downwardapi-volume-b39e0408-9c1e-11ea-8e9c-0242ac110018 to disappear May 22 11:23:50.799: INFO: Pod downwardapi-volume-b39e0408-9c1e-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:23:50.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-29v8c" for this suite. May 22 11:23:56.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:23:56.889: INFO: namespace: e2e-tests-projected-29v8c, resource: bindings, ignored listing per whitelist May 22 11:23:56.891: INFO: namespace e2e-tests-projected-29v8c deletion completed in 6.088922061s • [SLOW TEST:10.541 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:23:56.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 11:23:56.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-lrvw8' May 22 11:23:57.125: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 11:23:57.126: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 22 11:23:57.130: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 22 11:23:57.163: INFO: scanned /root for discovery docs: May 22 11:23:57.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-lrvw8' May 22 11:24:12.961: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 22 11:24:12.961: INFO: stdout: "Created e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec\nScaling up e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 22 11:24:12.961: INFO: stdout: "Created e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec\nScaling up e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 22 11:24:12.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lrvw8' May 22 11:24:13.072: INFO: stderr: "" May 22 11:24:13.072: INFO: stdout: "e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec-q7s5h " May 22 11:24:13.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec-q7s5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrvw8' May 22 11:24:13.183: INFO: stderr: "" May 22 11:24:13.183: INFO: stdout: "true" May 22 11:24:13.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec-q7s5h -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrvw8' May 22 11:24:13.278: INFO: stderr: "" May 22 11:24:13.278: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 22 11:24:13.278: INFO: e2e-test-nginx-rc-826127cd10e4fbfc004e69717a59d5ec-q7s5h is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 22 11:24:13.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lrvw8' May 22 11:24:13.386: INFO: stderr: "" May 22 11:24:13.386: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:24:13.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lrvw8" for this suite. May 22 11:24:19.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:24:19.491: INFO: namespace: e2e-tests-kubectl-lrvw8, resource: bindings, ignored listing per whitelist May 22 11:24:19.530: INFO: namespace e2e-tests-kubectl-lrvw8 deletion completed in 6.118531981s • [SLOW TEST:22.639 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:24:19.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 11:24:19.653: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:24:23.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rzts2" for this suite. 
May 22 11:25:01.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:25:01.818: INFO: namespace: e2e-tests-pods-rzts2, resource: bindings, ignored listing per whitelist May 22 11:25:01.823: INFO: namespace e2e-tests-pods-rzts2 deletion completed in 38.114830898s • [SLOW TEST:42.293 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:25:01.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-e09bbe47-9c1e-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:25:01.967: INFO: Waiting up to 5m0s for pod "pod-secrets-e09d8e42-9c1e-11ea-8e9c-0242ac110018" in namespace "e2e-tests-secrets-wjp7x" to be "success or failure" May 22 11:25:01.971: INFO: Pod "pod-secrets-e09d8e42-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.419014ms May 22 11:25:03.975: INFO: Pod "pod-secrets-e09d8e42-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007401705s May 22 11:25:05.979: INFO: Pod "pod-secrets-e09d8e42-9c1e-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01114357s STEP: Saw pod success May 22 11:25:05.979: INFO: Pod "pod-secrets-e09d8e42-9c1e-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:25:05.981: INFO: Trying to get logs from node hunter-worker pod pod-secrets-e09d8e42-9c1e-11ea-8e9c-0242ac110018 container secret-volume-test: STEP: delete the pod May 22 11:25:06.017: INFO: Waiting for pod pod-secrets-e09d8e42-9c1e-11ea-8e9c-0242ac110018 to disappear May 22 11:25:06.032: INFO: Pod pod-secrets-e09d8e42-9c1e-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:25:06.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-wjp7x" for this suite. 
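A minimal sketch of the kind of pod this test builds, mounting the same secret through two separate volumes; the mount paths, image and command are illustrative assumptions, while the secret name, container name and namespace come from the log above.

cat <<EOF | kubectl create -f - --namespace=e2e-tests-secrets-wjp7x
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example                 # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["sh", "-c", "ls -l /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-e09bbe47-9c1e-11ea-8e9c-0242ac110018
  - name: secret-volume-2
    secret:
      secretName: secret-test-e09bbe47-9c1e-11ea-8e9c-0242ac110018
EOF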
May 22 11:25:12.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:25:12.092: INFO: namespace: e2e-tests-secrets-wjp7x, resource: bindings, ignored listing per whitelist May 22 11:25:12.117: INFO: namespace e2e-tests-secrets-wjp7x deletion completed in 6.081104406s • [SLOW TEST:10.293 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:25:12.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-8xnbp/configmap-test-e6b4234b-9c1e-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:25:12.203: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6b54c6c-9c1e-11ea-8e9c-0242ac110018" in namespace "e2e-tests-configmap-8xnbp" to be "success or failure" May 22 11:25:12.218: INFO: Pod "pod-configmaps-e6b54c6c-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.320338ms May 22 11:25:14.222: INFO: Pod "pod-configmaps-e6b54c6c-9c1e-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018682828s May 22 11:25:16.227: INFO: Pod "pod-configmaps-e6b54c6c-9c1e-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023311881s STEP: Saw pod success May 22 11:25:16.227: INFO: Pod "pod-configmaps-e6b54c6c-9c1e-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:25:16.230: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e6b54c6c-9c1e-11ea-8e9c-0242ac110018 container env-test: STEP: delete the pod May 22 11:25:16.248: INFO: Waiting for pod pod-configmaps-e6b54c6c-9c1e-11ea-8e9c-0242ac110018 to disappear May 22 11:25:16.253: INFO: Pod pod-configmaps-e6b54c6c-9c1e-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:25:16.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8xnbp" for this suite. 
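A minimal sketch of consuming a ConfigMap through the environment, as this test does; the key and variable names are illustrative assumptions, while the ConfigMap name, container name and namespace are taken from the log above.

cat <<EOF | kubectl create -f - --namespace=e2e-tests-configmap-8xnbp
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1                   # illustrative variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test-e6b4234b-9c1e-11ea-8e9c-0242ac110018
          key: data-1                       # illustrative key
EOF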
May 22 11:25:22.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:25:22.313: INFO: namespace: e2e-tests-configmap-8xnbp, resource: bindings, ignored listing per whitelist May 22 11:25:22.382: INFO: namespace e2e-tests-configmap-8xnbp deletion completed in 6.125283791s • [SLOW TEST:10.265 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:25:22.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 22 11:25:22.520: INFO: Pod name pod-release: Found 0 pods out of 1 May 22 11:25:27.547: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:25:28.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-kbrkl" for this suite. 
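"Released" here means the pod's label is changed so it no longer matches the controller's selector; the controller then drops its ownerReference on the pod and starts a replacement. Assuming the controller selects on a name=pod-release label (the log only shows the pod-release base name), a rough manual equivalent would be:

# Relabel one controlled pod so it falls outside the selector (pod name is a placeholder):
kubectl label pod <pod-release-xxxxx> name=released --overwrite --namespace=e2e-tests-replication-controller-kbrkl
# The released pod should now carry no controller ownerReference, and the RC creates a replacement:
kubectl get pod <pod-release-xxxxx> -o jsonpath='{.metadata.ownerReferences}' --namespace=e2e-tests-replication-controller-kbrkl
kubectl get pods -l name=pod-release --namespace=e2e-tests-replication-controller-kbrkl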
May 22 11:25:34.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:25:34.817: INFO: namespace: e2e-tests-replication-controller-kbrkl, resource: bindings, ignored listing per whitelist May 22 11:25:34.870: INFO: namespace e2e-tests-replication-controller-kbrkl deletion completed in 6.286598207s • [SLOW TEST:12.488 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:25:34.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 22 11:25:43.116: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 11:25:43.119: INFO: Pod pod-with-poststart-http-hook still exists May 22 11:25:45.119: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 11:25:45.123: INFO: Pod pod-with-poststart-http-hook still exists May 22 11:25:47.119: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 11:25:47.123: INFO: Pod pod-with-poststart-http-hook still exists May 22 11:25:49.119: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 11:25:49.123: INFO: Pod pod-with-poststart-http-hook still exists May 22 11:25:51.119: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 11:25:51.162: INFO: Pod pod-with-poststart-http-hook still exists May 22 11:25:53.119: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 11:25:53.122: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:25:53.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-xv87w" for this suite. 
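A minimal sketch of the pod-with-poststart-http-hook pod created above: the kubelet issues the HTTP GET right after the container starts, and the test then confirms the handler pod received it. The handler address, port, path and image are assumptions; only the pod name and namespace come from the log.

cat <<EOF | kubectl create -f - --namespace=e2e-tests-container-lifecycle-hook-xv87w
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine   # illustrative image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart              # hypothetical handler path
          host: 10.244.2.1                       # hypothetical handler-pod IP
          port: 8080                             # hypothetical handler port
EOF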
May 22 11:26:15.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:26:15.179: INFO: namespace: e2e-tests-container-lifecycle-hook-xv87w, resource: bindings, ignored listing per whitelist May 22 11:26:15.263: INFO: namespace e2e-tests-container-lifecycle-hook-xv87w deletion completed in 22.136154527s • [SLOW TEST:40.392 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:26:15.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:26:15.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-p9rrt" for this suite. 
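This test only asserts that the built-in master service exists in the default namespace and exposes the API server over HTTPS. Roughly equivalent manual checks:

kubectl get service kubernetes --namespace=default -o wide
kubectl get endpoints kubernetes --namespace=default
# Expect a ClusterIP service exposing 443/TCP (named "https") backed by the API server endpoints.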
May 22 11:26:21.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:26:21.432: INFO: namespace: e2e-tests-services-p9rrt, resource: bindings, ignored listing per whitelist May 22 11:26:21.469: INFO: namespace e2e-tests-services-p9rrt deletion completed in 6.105956339s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.206 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:26:21.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-100e808c-9c1f-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:26:21.602: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-10121e0f-9c1f-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-d5fpk" to be "success or failure" May 22 11:26:21.605: INFO: Pod "pod-projected-secrets-10121e0f-9c1f-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.060643ms May 22 11:26:23.671: INFO: Pod "pod-projected-secrets-10121e0f-9c1f-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068813208s May 22 11:26:25.676: INFO: Pod "pod-projected-secrets-10121e0f-9c1f-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073332885s STEP: Saw pod success May 22 11:26:25.676: INFO: Pod "pod-projected-secrets-10121e0f-9c1f-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:26:25.680: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-10121e0f-9c1f-11ea-8e9c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 22 11:26:25.702: INFO: Waiting for pod pod-projected-secrets-10121e0f-9c1f-11ea-8e9c-0242ac110018 to disappear May 22 11:26:25.706: INFO: Pod pod-projected-secrets-10121e0f-9c1f-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:26:25.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d5fpk" for this suite. 
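A minimal sketch of a projected volume sourcing the secret created above with an explicit defaultMode, which is what this test asserts on the mounted files; the mode value, mount path, image and command are illustrative assumptions, while the secret name, container name and namespace come from the log.

cat <<EOF | kubectl create -f - --namespace=e2e-tests-projected-d5fpk
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                     # illustrative mode
      sources:
      - secret:
          name: projected-secret-test-100e808c-9c1f-11ea-8e9c-0242ac110018
EOF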
May 22 11:26:31.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:26:31.980: INFO: namespace: e2e-tests-projected-d5fpk, resource: bindings, ignored listing per whitelist May 22 11:26:32.005: INFO: namespace e2e-tests-projected-d5fpk deletion completed in 6.295720297s • [SLOW TEST:10.536 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:26:32.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 22 11:26:32.132: INFO: PodSpec: initContainers in spec.initContainers May 22 11:27:20.711: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-165cdd23-9c1f-11ea-8e9c-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-m5c2m", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-m5c2m/pods/pod-init-165cdd23-9c1f-11ea-8e9c-0242ac110018", UID:"165faac8-9c1f-11ea-99e8-0242ac110002", ResourceVersion:"11916432", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725743592, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"132540691"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-p4btw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001790500), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p4btw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p4btw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p4btw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0009a3a38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0017e52c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0009a3b00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0009a3b20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0009a3b28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0009a3b2c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725743592, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725743592, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725743592, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725743592, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.225", StartTime:(*v1.Time)(0xc002436e20), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003a2bd0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003a2c40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://fbd4f2f5886402847f342ff2d89c615b4a6981934345c75074f9b85e956a0abf"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002436e60), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002436e40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:27:20.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-m5c2m" for this suite. May 22 11:27:42.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:27:42.816: INFO: namespace: e2e-tests-init-container-m5c2m, resource: bindings, ignored listing per whitelist May 22 11:27:42.879: INFO: namespace e2e-tests-init-container-m5c2m deletion completed in 22.088629147s • [SLOW TEST:70.874 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:27:42.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 11:27:43.014: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
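The "simple daemon set" created just above is not printed in the log. A hedged sketch of a comparable DaemonSet follows: the selector label and container name are assumptions, while the DaemonSet name, namespace, RollingUpdate strategy under test and the initial image (reported by the image checks further below) are taken from the log.

cat <<EOF | kubectl create -f - --namespace=e2e-tests-daemonsets-b4twk
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set       # hypothetical label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                      # hypothetical container name
        image: docker.io/library/nginx:1.14-alpine
EOF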
May 22 11:27:43.021: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:43.023: INFO: Number of nodes with available pods: 0 May 22 11:27:43.023: INFO: Node hunter-worker is running more than one daemon pod May 22 11:27:44.028: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:44.031: INFO: Number of nodes with available pods: 0 May 22 11:27:44.031: INFO: Node hunter-worker is running more than one daemon pod May 22 11:27:45.027: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:45.031: INFO: Number of nodes with available pods: 0 May 22 11:27:45.031: INFO: Node hunter-worker is running more than one daemon pod May 22 11:27:46.121: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:46.171: INFO: Number of nodes with available pods: 0 May 22 11:27:46.171: INFO: Node hunter-worker is running more than one daemon pod May 22 11:27:47.044: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:47.048: INFO: Number of nodes with available pods: 0 May 22 11:27:47.048: INFO: Node hunter-worker is running more than one daemon pod May 22 11:27:48.028: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:48.031: INFO: Number of nodes with available pods: 2 May 22 11:27:48.031: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 22 11:27:48.063: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:48.063: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:48.098: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:49.145: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:49.146: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:49.150: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:50.103: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:50.103: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 22 11:27:50.107: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:51.107: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:51.107: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:51.110: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:52.102: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:52.103: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:52.103: INFO: Pod daemon-set-vrs4l is not available May 22 11:27:52.107: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:53.102: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:53.102: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:53.102: INFO: Pod daemon-set-vrs4l is not available May 22 11:27:53.107: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:54.103: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:54.103: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:54.103: INFO: Pod daemon-set-vrs4l is not available May 22 11:27:54.108: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:55.103: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:55.103: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:55.103: INFO: Pod daemon-set-vrs4l is not available May 22 11:27:55.107: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:56.103: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:56.103: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 22 11:27:56.103: INFO: Pod daemon-set-vrs4l is not available May 22 11:27:56.107: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:57.103: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:57.103: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:57.103: INFO: Pod daemon-set-vrs4l is not available May 22 11:27:57.108: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:58.102: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:58.102: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:58.102: INFO: Pod daemon-set-vrs4l is not available May 22 11:27:58.105: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:27:59.115: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:59.115: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:27:59.115: INFO: Pod daemon-set-vrs4l is not available May 22 11:27:59.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:00.103: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:28:00.103: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:28:00.103: INFO: Pod daemon-set-vrs4l is not available May 22 11:28:00.107: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:01.103: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:28:01.103: INFO: Wrong image for pod: daemon-set-vrs4l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:28:01.103: INFO: Pod daemon-set-vrs4l is not available May 22 11:28:01.108: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:02.103: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 22 11:28:02.103: INFO: Pod daemon-set-pmznx is not available May 22 11:28:02.107: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:03.102: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:28:03.102: INFO: Pod daemon-set-pmznx is not available May 22 11:28:03.105: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:04.102: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:28:04.102: INFO: Pod daemon-set-pmznx is not available May 22 11:28:04.106: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:05.102: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:28:05.106: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:06.103: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:28:06.108: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:07.102: INFO: Wrong image for pod: daemon-set-fwlwm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 11:28:07.102: INFO: Pod daemon-set-fwlwm is not available May 22 11:28:07.106: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:08.103: INFO: Pod daemon-set-sf2z6 is not available May 22 11:28:08.107: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 22 11:28:08.110: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:08.112: INFO: Number of nodes with available pods: 1 May 22 11:28:08.112: INFO: Node hunter-worker2 is running more than one daemon pod May 22 11:28:09.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:09.128: INFO: Number of nodes with available pods: 1 May 22 11:28:09.128: INFO: Node hunter-worker2 is running more than one daemon pod May 22 11:28:10.117: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:10.120: INFO: Number of nodes with available pods: 1 May 22 11:28:10.120: INFO: Node hunter-worker2 is running more than one daemon pod May 22 11:28:11.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:28:11.122: INFO: Number of nodes with available pods: 2 May 22 11:28:11.122: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-b4twk, will wait for the garbage collector to delete the pods May 22 11:28:11.231: INFO: Deleting DaemonSet.extensions daemon-set took: 21.840146ms May 22 11:28:11.331: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.237924ms May 22 11:28:21.734: INFO: Number of nodes with available pods: 0 May 22 11:28:21.734: INFO: Number of running nodes: 0, number of available pods: 0 May 22 11:28:21.736: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-b4twk/daemonsets","resourceVersion":"11916642"},"items":null} May 22 11:28:21.739: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-b4twk/pods","resourceVersion":"11916642"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:28:21.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-b4twk" for this suite. 
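Editor's note: the polling above is the RollingUpdate check in action: the DaemonSet's pod template image is changed and the framework waits until no pod still runs the old image and every schedulable node has an available pod again. A minimal way to reproduce the same scenario by hand, assuming a working kubeconfig (the ds-demo namespace and the container name "app" are made up for illustration):

kubectl create namespace ds-demo
cat <<'EOF' | kubectl apply -n ds-demo -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate            # replace pods in place when the template changes
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Update the template image and watch the controller roll the pods; this is the
# transition the "Wrong image for pod" messages above are waiting out.
kubectl -n ds-demo set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl -n ds-demo rollout status daemonset/daemon-set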
May 22 11:28:27.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:28:27.807: INFO: namespace: e2e-tests-daemonsets-b4twk, resource: bindings, ignored listing per whitelist May 22 11:28:27.834: INFO: namespace e2e-tests-daemonsets-b4twk deletion completed in 6.082776359s • [SLOW TEST:44.955 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:28:27.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-5b61071c-9c1f-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:28:27.942: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5b62d799-9c1f-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-wrfrr" to be "success or failure" May 22 11:28:27.946: INFO: Pod "pod-projected-configmaps-5b62d799-9c1f-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.918518ms May 22 11:28:29.949: INFO: Pod "pod-projected-configmaps-5b62d799-9c1f-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007718285s May 22 11:28:31.960: INFO: Pod "pod-projected-configmaps-5b62d799-9c1f-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017775375s STEP: Saw pod success May 22 11:28:31.960: INFO: Pod "pod-projected-configmaps-5b62d799-9c1f-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:28:31.962: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-5b62d799-9c1f-11ea-8e9c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 22 11:28:31.993: INFO: Waiting for pod pod-projected-configmaps-5b62d799-9c1f-11ea-8e9c-0242ac110018 to disappear May 22 11:28:32.010: INFO: Pod pod-projected-configmaps-5b62d799-9c1f-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:28:32.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wrfrr" for this suite. 
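Editor's note: in the projected configMap case just above, the pod mounts the configMap through a projected volume and runs with a non-root UID before reading the key back. A rough equivalent outside the framework, with illustrative configmap, pod and mount-path names and the default namespace assumed:

kubectl create configmap projected-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # the whole pod runs as a non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm
EOF
kubectl logs projected-configmap-nonroot     # prints value-1 once the container has run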
May 22 11:28:38.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:28:38.268: INFO: namespace: e2e-tests-projected-wrfrr, resource: bindings, ignored listing per whitelist May 22 11:28:38.320: INFO: namespace e2e-tests-projected-wrfrr deletion completed in 6.306201442s • [SLOW TEST:10.486 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:28:38.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 22 11:28:38.986: INFO: Pod name wrapped-volume-race-61f2d464-9c1f-11ea-8e9c-0242ac110018: Found 0 pods out of 5 May 22 11:28:43.995: INFO: Pod name wrapped-volume-race-61f2d464-9c1f-11ea-8e9c-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-61f2d464-9c1f-11ea-8e9c-0242ac110018 in namespace e2e-tests-emptydir-wrapper-lv76j, will wait for the garbage collector to delete the pods May 22 11:31:28.081: INFO: Deleting ReplicationController wrapped-volume-race-61f2d464-9c1f-11ea-8e9c-0242ac110018 took: 8.705731ms May 22 11:31:28.282: INFO: Terminating ReplicationController wrapped-volume-race-61f2d464-9c1f-11ea-8e9c-0242ac110018 pods took: 200.640666ms STEP: Creating RC which spawns configmap-volume pods May 22 11:32:12.517: INFO: Pod name wrapped-volume-race-e13a27d8-9c1f-11ea-8e9c-0242ac110018: Found 0 pods out of 5 May 22 11:32:17.524: INFO: Pod name wrapped-volume-race-e13a27d8-9c1f-11ea-8e9c-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e13a27d8-9c1f-11ea-8e9c-0242ac110018 in namespace e2e-tests-emptydir-wrapper-lv76j, will wait for the garbage collector to delete the pods May 22 11:34:49.611: INFO: Deleting ReplicationController wrapped-volume-race-e13a27d8-9c1f-11ea-8e9c-0242ac110018 took: 8.765102ms May 22 11:34:49.711: INFO: Terminating ReplicationController wrapped-volume-race-e13a27d8-9c1f-11ea-8e9c-0242ac110018 pods took: 100.254354ms STEP: Creating RC which spawns configmap-volume pods May 22 11:35:32.350: INFO: Pod name wrapped-volume-race-5855b8c1-9c20-11ea-8e9c-0242ac110018: Found 0 pods out of 5 May 22 11:35:37.358: INFO: Pod name wrapped-volume-race-5855b8c1-9c20-11ea-8e9c-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-5855b8c1-9c20-11ea-8e9c-0242ac110018 in namespace e2e-tests-emptydir-wrapper-lv76j, will wait for the garbage collector to delete the pods May 22 11:38:11.441: INFO: Deleting ReplicationController wrapped-volume-race-5855b8c1-9c20-11ea-8e9c-0242ac110018 took: 7.627545ms May 22 11:38:11.542: INFO: Terminating ReplicationController wrapped-volume-race-5855b8c1-9c20-11ea-8e9c-0242ac110018 pods took: 100.276519ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:38:53.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-lv76j" for this suite. May 22 11:39:01.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:39:01.739: INFO: namespace: e2e-tests-emptydir-wrapper-lv76j, resource: bindings, ignored listing per whitelist May 22 11:39:01.775: INFO: namespace e2e-tests-emptydir-wrapper-lv76j deletion completed in 8.104237114s • [SLOW TEST:623.454 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:39:01.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 22 11:39:01.861: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:39:01.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zr8qp" for this suite. 
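Editor's note: the proxy test above only checks that kubectl proxy binds to an ephemeral port when asked for port 0 and that the API answers through it. The same check by hand; the port in the curl line is whatever the proxy actually prints, 42041 is just an example:

kubectl proxy -p 0 --disable-filter=true &
# The proxy prints something like: Starting to serve on 127.0.0.1:42041
curl http://127.0.0.1:42041/api/     # returns the APIVersions object if the proxy is healthy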
May 22 11:39:08.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:39:08.027: INFO: namespace: e2e-tests-kubectl-zr8qp, resource: bindings, ignored listing per whitelist May 22 11:39:08.092: INFO: namespace e2e-tests-kubectl-zr8qp deletion completed in 6.114304658s • [SLOW TEST:6.317 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:39:08.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-d902ffae-9c20-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:39:08.208: INFO: Waiting up to 5m0s for pod "pod-configmaps-d903cbe3-9c20-11ea-8e9c-0242ac110018" in namespace "e2e-tests-configmap-qgnpf" to be "success or failure" May 22 11:39:08.225: INFO: Pod "pod-configmaps-d903cbe3-9c20-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.365426ms May 22 11:39:10.401: INFO: Pod "pod-configmaps-d903cbe3-9c20-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192797015s May 22 11:39:12.405: INFO: Pod "pod-configmaps-d903cbe3-9c20-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.19714022s May 22 11:39:14.410: INFO: Pod "pod-configmaps-d903cbe3-9c20-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.201333913s STEP: Saw pod success May 22 11:39:14.410: INFO: Pod "pod-configmaps-d903cbe3-9c20-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:39:14.413: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d903cbe3-9c20-11ea-8e9c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 22 11:39:14.451: INFO: Waiting for pod pod-configmaps-d903cbe3-9c20-11ea-8e9c-0242ac110018 to disappear May 22 11:39:14.464: INFO: Pod pod-configmaps-d903cbe3-9c20-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:39:14.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qgnpf" for this suite. 
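Editor's note: the plain ConfigMap volume variant above differs from the earlier projected sketch only in the volume source. A compact stand-in with illustrative names:

kubectl create configmap test-volume-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:                      # plain configMap source, no projection
      name: test-volume-cm
EOF
kubectl logs configmap-volume-pod            # prints value-1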
May 22 11:39:20.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:39:20.501: INFO: namespace: e2e-tests-configmap-qgnpf, resource: bindings, ignored listing per whitelist May 22 11:39:20.558: INFO: namespace e2e-tests-configmap-qgnpf deletion completed in 6.090466932s • [SLOW TEST:12.465 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:39:20.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-e072bd67-9c20-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:39:20.680: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0733496-9c20-11ea-8e9c-0242ac110018" in namespace "e2e-tests-configmap-95sww" to be "success or failure" May 22 11:39:20.684: INFO: Pod "pod-configmaps-e0733496-9c20-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.787292ms May 22 11:39:22.688: INFO: Pod "pod-configmaps-e0733496-9c20-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007716573s May 22 11:39:24.692: INFO: Pod "pod-configmaps-e0733496-9c20-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011922544s STEP: Saw pod success May 22 11:39:24.692: INFO: Pod "pod-configmaps-e0733496-9c20-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:39:24.694: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-e0733496-9c20-11ea-8e9c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 22 11:39:24.708: INFO: Waiting for pod pod-configmaps-e0733496-9c20-11ea-8e9c-0242ac110018 to disappear May 22 11:39:24.713: INFO: Pod pod-configmaps-e0733496-9c20-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:39:24.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-95sww" for this suite. 
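Editor's note: the defaultMode variant sets the file mode the kubelet applies to every key it writes into the volume. A sketch that changes only that knob and checks it with stat, which follows the symlink the kubelet creates (names are illustrative and 0400 is an arbitrary choice; the configmap from the previous sketch is reused):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-defaultmode-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: test-volume-cm
      defaultMode: 0400             # read-only for the owner, nothing for anyone else
EOF
kubectl logs configmap-defaultmode-pod       # prints 400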
May 22 11:39:30.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:39:30.740: INFO: namespace: e2e-tests-configmap-95sww, resource: bindings, ignored listing per whitelist May 22 11:39:30.796: INFO: namespace e2e-tests-configmap-95sww deletion completed in 6.080641704s • [SLOW TEST:10.238 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:39:30.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-e6852e0f-9c20-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:39:30.896: INFO: Waiting up to 5m0s for pod "pod-configmaps-e687e4ee-9c20-11ea-8e9c-0242ac110018" in namespace "e2e-tests-configmap-m5qhk" to be "success or failure" May 22 11:39:30.899: INFO: Pod "pod-configmaps-e687e4ee-9c20-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.2946ms May 22 11:39:32.904: INFO: Pod "pod-configmaps-e687e4ee-9c20-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008098394s May 22 11:39:34.907: INFO: Pod "pod-configmaps-e687e4ee-9c20-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011645218s STEP: Saw pod success May 22 11:39:34.908: INFO: Pod "pod-configmaps-e687e4ee-9c20-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:39:34.910: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e687e4ee-9c20-11ea-8e9c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 22 11:39:34.983: INFO: Waiting for pod pod-configmaps-e687e4ee-9c20-11ea-8e9c-0242ac110018 to disappear May 22 11:39:34.992: INFO: Pod pod-configmaps-e687e4ee-9c20-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:39:34.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-m5qhk" for this suite. 
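Editor's note: the mappings-as-non-root variant remaps a configMap key to a different relative path via items and reads it back under a non-root UID. Sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: test-volume-cm
      items:
      - key: data-1
        path: path/to/data-1        # the key is exposed at this relative path instead of its own name
EOF
kubectl logs configmap-mapping-nonroot       # prints value-1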
May 22 11:39:41.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:39:41.037: INFO: namespace: e2e-tests-configmap-m5qhk, resource: bindings, ignored listing per whitelist May 22 11:39:41.104: INFO: namespace e2e-tests-configmap-m5qhk deletion completed in 6.108275033s • [SLOW TEST:10.307 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:39:41.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0522 11:40:22.275452 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 22 11:40:22.275: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:40:22.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9x857" for this suite. 
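Editor's note: the garbage collector test above deletes the replication controller with an Orphan propagation policy and then waits 30 seconds to confirm the pods were not deleted with it. By hand that looks roughly like this; the RC name and label are illustrative, and kubectl clients older than about 1.20 spell the flag --cascade=false instead:

kubectl delete rc simpletest.rc --cascade=orphan     # the RC object goes away, its pods are left behind
kubectl get pods -l name=simpletest.rc               # the orphaned pods are still Running, now without an owner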
May 22 11:40:30.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:40:30.407: INFO: namespace: e2e-tests-gc-9x857, resource: bindings, ignored listing per whitelist May 22 11:40:30.412: INFO: namespace e2e-tests-gc-9x857 deletion completed in 8.133094586s • [SLOW TEST:49.308 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:40:30.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 22 11:40:37.468: INFO: 0 pods remaining May 22 11:40:37.468: INFO: 0 pods has nil DeletionTimestamp May 22 11:40:37.468: INFO: STEP: Gathering metrics W0522 11:40:37.702099 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 22 11:40:37.702: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:40:37.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-zdmp2" for this suite. 
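Editor's note: the second garbage collector test uses Foreground propagation instead: the RC keeps existing, carrying a deletionTimestamp and the foregroundDeletion finalizer, until the collector has removed every pod it owns, which is what the "0 pods remaining" polling above reflects. A by-hand equivalent with an illustrative RC name:

kubectl delete rc simpletest.rc --cascade=foreground &
# While the pods are being cleaned up, the RC is still readable and carries the finalizer:
kubectl get rc simpletest.rc -o jsonpath='{.metadata.finalizers}'   # shows foregroundDeletion until cleanup finishes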
May 22 11:40:45.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:40:45.797: INFO: namespace: e2e-tests-gc-zdmp2, resource: bindings, ignored listing per whitelist May 22 11:40:45.815: INFO: namespace e2e-tests-gc-zdmp2 deletion completed in 8.110406131s • [SLOW TEST:15.403 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:40:45.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-k52d6 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-k52d6 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-k52d6 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-k52d6 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-k52d6 May 22 11:40:50.042: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-k52d6, name: ss-0, uid: 14757243-9c21-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. May 22 11:40:51.244: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-k52d6, name: ss-0, uid: 14757243-9c21-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 22 11:40:51.301: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-k52d6, name: ss-0, uid: 14757243-9c21-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
May 22 11:40:51.332: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-k52d6 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-k52d6 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-k52d6 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 22 11:40:59.780: INFO: Deleting all statefulset in ns e2e-tests-statefulset-k52d6 May 22 11:40:59.782: INFO: Scaling statefulset ss to 0 May 22 11:41:09.970: INFO: Waiting for statefulset status.replicas updated to 0 May 22 11:41:09.973: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:41:10.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-k52d6" for this suite. May 22 11:41:16.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:41:16.338: INFO: namespace: e2e-tests-statefulset-k52d6, resource: bindings, ignored listing per whitelist May 22 11:41:16.358: INFO: namespace e2e-tests-statefulset-k52d6 deletion completed in 6.344423584s • [SLOW TEST:30.543 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:41:16.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:41:16.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2574d5ec-9c21-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-vsmhl" to be "success or failure" May 22 11:41:16.468: INFO: Pod "downwardapi-volume-2574d5ec-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.743919ms May 22 11:41:18.534: INFO: Pod "downwardapi-volume-2574d5ec-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069441985s May 22 11:41:20.538: INFO: Pod "downwardapi-volume-2574d5ec-9c21-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.073665963s STEP: Saw pod success May 22 11:41:20.538: INFO: Pod "downwardapi-volume-2574d5ec-9c21-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:41:20.542: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2574d5ec-9c21-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:41:20.771: INFO: Waiting for pod downwardapi-volume-2574d5ec-9c21-11ea-8e9c-0242ac110018 to disappear May 22 11:41:20.794: INFO: Pod downwardapi-volume-2574d5ec-9c21-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:41:20.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vsmhl" for this suite. May 22 11:41:26.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:41:26.831: INFO: namespace: e2e-tests-projected-vsmhl, resource: bindings, ignored listing per whitelist May 22 11:41:26.876: INFO: namespace e2e-tests-projected-vsmhl deletion completed in 6.078105495s • [SLOW TEST:10.518 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:41:26.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 11:41:27.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-7bvjw' May 22 11:41:30.006: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 11:41:30.006: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 22 11:41:32.044: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-9j9kp] May 22 11:41:32.044: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-9j9kp" in namespace "e2e-tests-kubectl-7bvjw" to be "running and ready" May 22 11:41:32.048: INFO: Pod "e2e-test-nginx-rc-9j9kp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.379337ms May 22 11:41:34.052: INFO: Pod "e2e-test-nginx-rc-9j9kp": Phase="Running", Reason="", readiness=true. Elapsed: 2.007454261s May 22 11:41:34.052: INFO: Pod "e2e-test-nginx-rc-9j9kp" satisfied condition "running and ready" May 22 11:41:34.052: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-9j9kp] May 22 11:41:34.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7bvjw' May 22 11:41:34.212: INFO: stderr: "" May 22 11:41:34.212: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 22 11:41:34.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-7bvjw' May 22 11:41:34.305: INFO: stderr: "" May 22 11:41:34.305: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:41:34.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7bvjw" for this suite. 
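Editor's note: this kubectl-run test drives the old run/v1 generator, which creates a ReplicationController rather than a bare pod, then checks that logs can be fetched through the rc. Replayed by hand on a kubectl client that still accepts --generator (newer clients dropped the flag and kubectl run now only creates pods):

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl get rc e2e-test-nginx-rc                     # the generator created a ReplicationController
kubectl logs rc/e2e-test-nginx-rc                    # empty output is expected; it matches the empty stdout the test saw
kubectl delete rc e2e-test-nginx-rc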
May 22 11:41:56.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:41:56.435: INFO: namespace: e2e-tests-kubectl-7bvjw, resource: bindings, ignored listing per whitelist May 22 11:41:56.454: INFO: namespace e2e-tests-kubectl-7bvjw deletion completed in 22.115158429s • [SLOW TEST:29.578 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:41:56.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:42:04.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-5cx7t" for this suite. 
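Editor's note: the kubelet test runs a busybox command that always fails and asserts that the container's terminated state carries a reason. A pared-down sketch of the same idea; the pod name is made up and the conformance test uses its own pod spec:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]          # exits non-zero immediately
EOF
# After the container has run, the termination reason the test asserts on is visible here:
kubectl get pod bin-false-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# -> Error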
May 22 11:42:10.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:42:10.796: INFO: namespace: e2e-tests-kubelet-test-5cx7t, resource: bindings, ignored listing per whitelist May 22 11:42:10.966: INFO: namespace e2e-tests-kubelet-test-5cx7t deletion completed in 6.278422945s • [SLOW TEST:14.511 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:42:10.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-d7wbr STEP: creating a selector STEP: Creating the service pods in kubernetes May 22 11:42:11.112: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 22 11:42:37.336: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.243:8080/dial?request=hostName&protocol=udp&host=10.244.1.231&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-d7wbr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 11:42:37.336: INFO: >>> kubeConfig: /root/.kube/config I0522 11:42:37.369383 6 log.go:172] (0xc000a48a50) (0xc00213d220) Create stream I0522 11:42:37.369434 6 log.go:172] (0xc000a48a50) (0xc00213d220) Stream added, broadcasting: 1 I0522 11:42:37.371886 6 log.go:172] (0xc000a48a50) Reply frame received for 1 I0522 11:42:37.371932 6 log.go:172] (0xc000a48a50) (0xc0023d1220) Create stream I0522 11:42:37.371949 6 log.go:172] (0xc000a48a50) (0xc0023d1220) Stream added, broadcasting: 3 I0522 11:42:37.372867 6 log.go:172] (0xc000a48a50) Reply frame received for 3 I0522 11:42:37.372888 6 log.go:172] (0xc000a48a50) (0xc000352d20) Create stream I0522 11:42:37.372899 6 log.go:172] (0xc000a48a50) (0xc000352d20) Stream added, broadcasting: 5 I0522 11:42:37.374075 6 log.go:172] (0xc000a48a50) Reply frame received for 5 I0522 11:42:37.602965 6 log.go:172] (0xc000a48a50) Data frame received for 3 I0522 11:42:37.602985 6 log.go:172] (0xc0023d1220) (3) Data frame handling I0522 11:42:37.602992 6 log.go:172] (0xc0023d1220) (3) Data frame sent I0522 11:42:37.603554 6 log.go:172] (0xc000a48a50) Data frame received for 5 I0522 11:42:37.603600 6 log.go:172] (0xc000352d20) (5) Data frame handling I0522 11:42:37.603670 6 log.go:172] (0xc000a48a50) Data frame received for 3 
I0522 11:42:37.603704 6 log.go:172] (0xc0023d1220) (3) Data frame handling I0522 11:42:37.605877 6 log.go:172] (0xc000a48a50) Data frame received for 1 I0522 11:42:37.605894 6 log.go:172] (0xc00213d220) (1) Data frame handling I0522 11:42:37.605904 6 log.go:172] (0xc00213d220) (1) Data frame sent I0522 11:42:37.606112 6 log.go:172] (0xc000a48a50) (0xc00213d220) Stream removed, broadcasting: 1 I0522 11:42:37.606176 6 log.go:172] (0xc000a48a50) (0xc00213d220) Stream removed, broadcasting: 1 I0522 11:42:37.606186 6 log.go:172] (0xc000a48a50) (0xc0023d1220) Stream removed, broadcasting: 3 I0522 11:42:37.606298 6 log.go:172] (0xc000a48a50) (0xc000352d20) Stream removed, broadcasting: 5 May 22 11:42:37.606: INFO: Waiting for endpoints: map[] I0522 11:42:37.606693 6 log.go:172] (0xc000a48a50) Go away received May 22 11:42:37.610: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.243:8080/dial?request=hostName&protocol=udp&host=10.244.2.242&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-d7wbr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 11:42:37.610: INFO: >>> kubeConfig: /root/.kube/config I0522 11:42:37.638916 6 log.go:172] (0xc000a48f20) (0xc00213d720) Create stream I0522 11:42:37.638943 6 log.go:172] (0xc000a48f20) (0xc00213d720) Stream added, broadcasting: 1 I0522 11:42:37.641901 6 log.go:172] (0xc000a48f20) Reply frame received for 1 I0522 11:42:37.641952 6 log.go:172] (0xc000a48f20) (0xc00206d5e0) Create stream I0522 11:42:37.641971 6 log.go:172] (0xc000a48f20) (0xc00206d5e0) Stream added, broadcasting: 3 I0522 11:42:37.643091 6 log.go:172] (0xc000a48f20) Reply frame received for 3 I0522 11:42:37.643136 6 log.go:172] (0xc000a48f20) (0xc000353220) Create stream I0522 11:42:37.643151 6 log.go:172] (0xc000a48f20) (0xc000353220) Stream added, broadcasting: 5 I0522 11:42:37.644155 6 log.go:172] (0xc000a48f20) Reply frame received for 5 I0522 11:42:37.716758 6 log.go:172] (0xc000a48f20) Data frame received for 3 I0522 11:42:37.716788 6 log.go:172] (0xc00206d5e0) (3) Data frame handling I0522 11:42:37.716802 6 log.go:172] (0xc00206d5e0) (3) Data frame sent I0522 11:42:37.717356 6 log.go:172] (0xc000a48f20) Data frame received for 3 I0522 11:42:37.717425 6 log.go:172] (0xc00206d5e0) (3) Data frame handling I0522 11:42:37.717454 6 log.go:172] (0xc000a48f20) Data frame received for 5 I0522 11:42:37.717467 6 log.go:172] (0xc000353220) (5) Data frame handling I0522 11:42:37.719206 6 log.go:172] (0xc000a48f20) Data frame received for 1 I0522 11:42:37.719228 6 log.go:172] (0xc00213d720) (1) Data frame handling I0522 11:42:37.719243 6 log.go:172] (0xc00213d720) (1) Data frame sent I0522 11:42:37.719259 6 log.go:172] (0xc000a48f20) (0xc00213d720) Stream removed, broadcasting: 1 I0522 11:42:37.719277 6 log.go:172] (0xc000a48f20) Go away received I0522 11:42:37.719349 6 log.go:172] (0xc000a48f20) (0xc00213d720) Stream removed, broadcasting: 1 I0522 11:42:37.719363 6 log.go:172] (0xc000a48f20) (0xc00206d5e0) Stream removed, broadcasting: 3 I0522 11:42:37.719370 6 log.go:172] (0xc000a48f20) (0xc000353220) Stream removed, broadcasting: 5 May 22 11:42:37.719: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:42:37.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-d7wbr" for this suite. 
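Editor's note: the long klog stream frames above are just the exec transport; the interesting part is the command being run. The framework execs a curl in the host test container and asks the netexec server on one pod to dial the other pod over UDP. Repeated by hand with the pod names, namespace and IPs from this run:

kubectl -n e2e-tests-pod-network-test-d7wbr exec host-test-container-pod -c hostexec -- \
  curl -g -q -s 'http://10.244.2.243:8080/dial?request=hostName&protocol=udp&host=10.244.1.231&port=8081&tries=1'
# A JSON body listing the target's hostname under "responses" means the pod-to-pod UDP path works;
# the framework's "Waiting for endpoints: map[]" above is its way of saying nothing is still missing.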
May 22 11:42:59.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:42:59.772: INFO: namespace: e2e-tests-pod-network-test-d7wbr, resource: bindings, ignored listing per whitelist May 22 11:42:59.867: INFO: namespace e2e-tests-pod-network-test-d7wbr deletion completed in 22.143813637s • [SLOW TEST:48.900 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:42:59.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 22 11:43:00.054: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-gkzst,SelfLink:/api/v1/namespaces/e2e-tests-watch-gkzst/configmaps/e2e-watch-test-resource-version,UID:6328041c-9c21-11ea-99e8-0242ac110002,ResourceVersion:11919441,Generation:0,CreationTimestamp:2020-05-22 11:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 22 11:43:00.054: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-gkzst,SelfLink:/api/v1/namespaces/e2e-tests-watch-gkzst/configmaps/e2e-watch-test-resource-version,UID:6328041c-9c21-11ea-99e8-0242ac110002,ResourceVersion:11919442,Generation:0,CreationTimestamp:2020-05-22 11:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:43:00.055: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-tests-watch-gkzst" for this suite. May 22 11:43:06.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:43:06.128: INFO: namespace: e2e-tests-watch-gkzst, resource: bindings, ignored listing per whitelist May 22 11:43:06.153: INFO: namespace e2e-tests-watch-gkzst deletion completed in 6.08932988s • [SLOW TEST:6.287 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:43:06.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-66e5d6bc-9c21-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 11:43:06.265: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-66e6813c-9c21-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-xw845" to be "success or failure" May 22 11:43:06.275: INFO: Pod "pod-projected-configmaps-66e6813c-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.291991ms May 22 11:43:08.278: INFO: Pod "pod-projected-configmaps-66e6813c-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013353268s May 22 11:43:10.283: INFO: Pod "pod-projected-configmaps-66e6813c-9c21-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01776947s STEP: Saw pod success May 22 11:43:10.283: INFO: Pod "pod-projected-configmaps-66e6813c-9c21-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:43:10.286: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-66e6813c-9c21-11ea-8e9c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 22 11:43:10.323: INFO: Waiting for pod pod-projected-configmaps-66e6813c-9c21-11ea-8e9c-0242ac110018 to disappear May 22 11:43:10.335: INFO: Pod pod-projected-configmaps-66e6813c-9c21-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:43:10.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xw845" for this suite. 
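Editor's note: a few tests back, the Watchers test starts a watch at the resource version returned by the first update and expects to see only the later MODIFIED and DELETED events. The same thing can be done against the raw API through a proxy; the namespace and resourceVersion below are illustrative, and 8001 is kubectl proxy's default port:

kubectl proxy &
curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=11919440"
# The stream replays every event newer than the given resourceVersion, which is exactly
# the MODIFIED (mutation: 2) and DELETED pair asserted above.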
May 22 11:43:16.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:43:16.451: INFO: namespace: e2e-tests-projected-xw845, resource: bindings, ignored listing per whitelist May 22 11:43:16.458: INFO: namespace e2e-tests-projected-xw845 deletion completed in 6.119613261s • [SLOW TEST:10.305 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:43:16.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 22 11:43:16.587: INFO: Waiting up to 5m0s for pod "pod-6d0e5d02-9c21-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-gfqwk" to be "success or failure" May 22 11:43:16.591: INFO: Pod "pod-6d0e5d02-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380957ms May 22 11:43:18.596: INFO: Pod "pod-6d0e5d02-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008876683s May 22 11:43:20.599: INFO: Pod "pod-6d0e5d02-9c21-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012585671s STEP: Saw pod success May 22 11:43:20.599: INFO: Pod "pod-6d0e5d02-9c21-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:43:20.601: INFO: Trying to get logs from node hunter-worker2 pod pod-6d0e5d02-9c21-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:43:20.631: INFO: Waiting for pod pod-6d0e5d02-9c21-11ea-8e9c-0242ac110018 to disappear May 22 11:43:20.638: INFO: Pod pod-6d0e5d02-9c21-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:43:20.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gfqwk" for this suite. 
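The emptyDir (non-root,0777,tmpfs) case above boils down to a tmpfs-backed emptyDir mounted by a container running as a non-root user. A rough equivalent is sketched below; the actual test uses a dedicated mount-test image and checks the 0777 mode from inside the container, so the command and user ID here are only assumptions.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-emptydir-tmpfs-example
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                      # assumed non-root UID
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume && grep /test-volume /proc/mounts"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory                     # tmpfs-backed emptyDir
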
May 22 11:43:26.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:43:26.712: INFO: namespace: e2e-tests-emptydir-gfqwk, resource: bindings, ignored listing per whitelist May 22 11:43:26.713: INFO: namespace e2e-tests-emptydir-gfqwk deletion completed in 6.071835773s • [SLOW TEST:10.255 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:43:26.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:43:26.810: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7325e98e-9c21-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-6gnds" to be "success or failure" May 22 11:43:26.813: INFO: Pod "downwardapi-volume-7325e98e-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.615748ms May 22 11:43:28.854: INFO: Pod "downwardapi-volume-7325e98e-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043667558s May 22 11:43:30.890: INFO: Pod "downwardapi-volume-7325e98e-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080393278s May 22 11:43:32.894: INFO: Pod "downwardapi-volume-7325e98e-9c21-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084503759s STEP: Saw pod success May 22 11:43:32.895: INFO: Pod "downwardapi-volume-7325e98e-9c21-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:43:32.898: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7325e98e-9c21-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:43:32.918: INFO: Waiting for pod downwardapi-volume-7325e98e-9c21-11ea-8e9c-0242ac110018 to disappear May 22 11:43:32.955: INFO: Pod downwardapi-volume-7325e98e-9c21-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:43:32.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6gnds" for this suite. 
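The Downward API volume test above exposes the container's own CPU request back to it as a file. A minimal sketch of that wiring, with an assumed request value and file path:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-example
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m                        # assumed value
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m                    # a 250m request is written to the file as "250"
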
May 22 11:43:38.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:43:39.053: INFO: namespace: e2e-tests-downward-api-6gnds, resource: bindings, ignored listing per whitelist May 22 11:43:39.060: INFO: namespace e2e-tests-downward-api-6gnds deletion completed in 6.100793097s • [SLOW TEST:12.346 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:43:39.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-7a87bd3e-9c21-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:43:39.212: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7a88e019-9c21-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-mp2ss" to be "success or failure" May 22 11:43:39.216: INFO: Pod "pod-projected-secrets-7a88e019-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.572012ms May 22 11:43:41.346: INFO: Pod "pod-projected-secrets-7a88e019-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133300304s May 22 11:43:43.350: INFO: Pod "pod-projected-secrets-7a88e019-9c21-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.137766679s May 22 11:43:45.354: INFO: Pod "pod-projected-secrets-7a88e019-9c21-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141823933s STEP: Saw pod success May 22 11:43:45.354: INFO: Pod "pod-projected-secrets-7a88e019-9c21-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:43:45.357: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-7a88e019-9c21-11ea-8e9c-0242ac110018 container secret-volume-test: STEP: delete the pod May 22 11:43:45.395: INFO: Waiting for pod pod-projected-secrets-7a88e019-9c21-11ea-8e9c-0242ac110018 to disappear May 22 11:43:45.423: INFO: Pod pod-projected-secrets-7a88e019-9c21-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:43:45.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mp2ss" for this suite. 
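The projected-secret test above mounts the same Secret through two separate projected volumes in one pod. A sketch under assumed names and data:

    apiVersion: v1
    kind: Secret
    metadata:
      name: projected-secret-example
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-example
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
          readOnly: true
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
          readOnly: true
      volumes:
      - name: secret-volume-1
        projected:
          sources:
          - secret:
              name: projected-secret-example
      - name: secret-volume-2
        projected:
          sources:
          - secret:
              name: projected-secret-example
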
May 22 11:43:51.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:43:51.868: INFO: namespace: e2e-tests-projected-mp2ss, resource: bindings, ignored listing per whitelist May 22 11:43:51.907: INFO: namespace e2e-tests-projected-mp2ss deletion completed in 6.480499351s • [SLOW TEST:12.848 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:43:51.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-qhqbc STEP: creating a selector STEP: Creating the service pods in kubernetes May 22 11:43:52.055: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 22 11:44:16.379: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.235:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-qhqbc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 11:44:16.379: INFO: >>> kubeConfig: /root/.kube/config I0522 11:44:16.409252 6 log.go:172] (0xc0015ec2c0) (0xc000fff7c0) Create stream I0522 11:44:16.409302 6 log.go:172] (0xc0015ec2c0) (0xc000fff7c0) Stream added, broadcasting: 1 I0522 11:44:16.411482 6 log.go:172] (0xc0015ec2c0) Reply frame received for 1 I0522 11:44:16.411536 6 log.go:172] (0xc0015ec2c0) (0xc0014fee60) Create stream I0522 11:44:16.411553 6 log.go:172] (0xc0015ec2c0) (0xc0014fee60) Stream added, broadcasting: 3 I0522 11:44:16.413933 6 log.go:172] (0xc0015ec2c0) Reply frame received for 3 I0522 11:44:16.413991 6 log.go:172] (0xc0015ec2c0) (0xc000fff860) Create stream I0522 11:44:16.414021 6 log.go:172] (0xc0015ec2c0) (0xc000fff860) Stream added, broadcasting: 5 I0522 11:44:16.415060 6 log.go:172] (0xc0015ec2c0) Reply frame received for 5 I0522 11:44:16.484753 6 log.go:172] (0xc0015ec2c0) Data frame received for 5 I0522 11:44:16.484778 6 log.go:172] (0xc000fff860) (5) Data frame handling I0522 11:44:16.484857 6 log.go:172] (0xc0015ec2c0) Data frame received for 3 I0522 11:44:16.484902 6 log.go:172] (0xc0014fee60) (3) Data frame handling I0522 11:44:16.484929 6 log.go:172] (0xc0014fee60) (3) Data frame sent I0522 11:44:16.484943 6 log.go:172] (0xc0015ec2c0) Data frame received for 3 I0522 11:44:16.484962 6 log.go:172] (0xc0014fee60) (3) Data frame handling I0522 11:44:16.486946 6 log.go:172] (0xc0015ec2c0) Data frame 
received for 1 I0522 11:44:16.486975 6 log.go:172] (0xc000fff7c0) (1) Data frame handling I0522 11:44:16.486998 6 log.go:172] (0xc000fff7c0) (1) Data frame sent I0522 11:44:16.487013 6 log.go:172] (0xc0015ec2c0) (0xc000fff7c0) Stream removed, broadcasting: 1 I0522 11:44:16.487038 6 log.go:172] (0xc0015ec2c0) Go away received I0522 11:44:16.487132 6 log.go:172] (0xc0015ec2c0) (0xc000fff7c0) Stream removed, broadcasting: 1 I0522 11:44:16.487147 6 log.go:172] (0xc0015ec2c0) (0xc0014fee60) Stream removed, broadcasting: 3 I0522 11:44:16.487153 6 log.go:172] (0xc0015ec2c0) (0xc000fff860) Stream removed, broadcasting: 5 May 22 11:44:16.487: INFO: Found all expected endpoints: [netserver-0] May 22 11:44:16.490: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.245:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-qhqbc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 11:44:16.490: INFO: >>> kubeConfig: /root/.kube/config I0522 11:44:16.523750 6 log.go:172] (0xc0015ec790) (0xc000fffa40) Create stream I0522 11:44:16.523775 6 log.go:172] (0xc0015ec790) (0xc000fffa40) Stream added, broadcasting: 1 I0522 11:44:16.538494 6 log.go:172] (0xc0015ec790) Reply frame received for 1 I0522 11:44:16.538551 6 log.go:172] (0xc0015ec790) (0xc0014fef00) Create stream I0522 11:44:16.538563 6 log.go:172] (0xc0015ec790) (0xc0014fef00) Stream added, broadcasting: 3 I0522 11:44:16.540052 6 log.go:172] (0xc0015ec790) Reply frame received for 3 I0522 11:44:16.540187 6 log.go:172] (0xc0015ec790) (0xc0014fefa0) Create stream I0522 11:44:16.540221 6 log.go:172] (0xc0015ec790) (0xc0014fefa0) Stream added, broadcasting: 5 I0522 11:44:16.541488 6 log.go:172] (0xc0015ec790) Reply frame received for 5 I0522 11:44:16.607478 6 log.go:172] (0xc0015ec790) Data frame received for 3 I0522 11:44:16.607552 6 log.go:172] (0xc0014fef00) (3) Data frame handling I0522 11:44:16.607628 6 log.go:172] (0xc0014fef00) (3) Data frame sent I0522 11:44:16.607875 6 log.go:172] (0xc0015ec790) Data frame received for 5 I0522 11:44:16.607909 6 log.go:172] (0xc0014fefa0) (5) Data frame handling I0522 11:44:16.607937 6 log.go:172] (0xc0015ec790) Data frame received for 3 I0522 11:44:16.607950 6 log.go:172] (0xc0014fef00) (3) Data frame handling I0522 11:44:16.610322 6 log.go:172] (0xc0015ec790) Data frame received for 1 I0522 11:44:16.610354 6 log.go:172] (0xc000fffa40) (1) Data frame handling I0522 11:44:16.610381 6 log.go:172] (0xc000fffa40) (1) Data frame sent I0522 11:44:16.610404 6 log.go:172] (0xc0015ec790) (0xc000fffa40) Stream removed, broadcasting: 1 I0522 11:44:16.610472 6 log.go:172] (0xc0015ec790) Go away received I0522 11:44:16.610512 6 log.go:172] (0xc0015ec790) (0xc000fffa40) Stream removed, broadcasting: 1 I0522 11:44:16.610536 6 log.go:172] (0xc0015ec790) (0xc0014fef00) Stream removed, broadcasting: 3 I0522 11:44:16.610555 6 log.go:172] (0xc0015ec790) (0xc0014fefa0) Stream removed, broadcasting: 5 May 22 11:44:16.610: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:44:16.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-qhqbc" for this suite. 
May 22 11:44:38.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:44:38.670: INFO: namespace: e2e-tests-pod-network-test-qhqbc, resource: bindings, ignored listing per whitelist May 22 11:44:38.713: INFO: namespace e2e-tests-pod-network-test-qhqbc deletion completed in 22.098019298s • [SLOW TEST:46.805 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:44:38.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-9sjv8 May 22 11:44:44.827: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-9sjv8 STEP: checking the pod's current state and verifying that restartCount is present May 22 11:44:44.830: INFO: Initial restart count of pod liveness-exec is 0 May 22 11:45:36.971: INFO: Restart count of pod e2e-tests-container-probe-9sjv8/liveness-exec is now 1 (52.140393594s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:45:37.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-9sjv8" for this suite. 
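The liveness-exec restart recorded above comes from a probe that runs 'cat /tmp/health' while the container deletes that file partway through its life, so the kubelet restarts it. A sketch close to the upstream pattern; the timings are typical values, not copied from this run:

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec
      labels:
        test: liveness
    spec:
      containers:
      - name: liveness
        image: busybox
        # the file exists for the first 30s; after it is removed the probe fails and the container is restarted
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -rf /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 15
          failureThreshold: 1
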
May 22 11:45:43.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:45:43.178: INFO: namespace: e2e-tests-container-probe-9sjv8, resource: bindings, ignored listing per whitelist May 22 11:45:43.218: INFO: namespace e2e-tests-container-probe-9sjv8 deletion completed in 6.092371601s • [SLOW TEST:64.505 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:45:43.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 22 11:45:43.925: INFO: created pod pod-service-account-defaultsa May 22 11:45:43.925: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 22 11:45:43.970: INFO: created pod pod-service-account-mountsa May 22 11:45:43.970: INFO: pod pod-service-account-mountsa service account token volume mount: true May 22 11:45:43.983: INFO: created pod pod-service-account-nomountsa May 22 11:45:43.983: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 22 11:45:44.008: INFO: created pod pod-service-account-defaultsa-mountspec May 22 11:45:44.008: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 22 11:45:44.024: INFO: created pod pod-service-account-mountsa-mountspec May 22 11:45:44.024: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 22 11:45:44.061: INFO: created pod pod-service-account-nomountsa-mountspec May 22 11:45:44.061: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 22 11:45:44.102: INFO: created pod pod-service-account-defaultsa-nomountspec May 22 11:45:44.102: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 22 11:45:44.134: INFO: created pod pod-service-account-mountsa-nomountspec May 22 11:45:44.134: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 22 11:45:44.174: INFO: created pod pod-service-account-nomountsa-nomountspec May 22 11:45:44.174: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:45:44.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-bcvfj" for this suite. 
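The ServiceAccounts test above walks every combination of opting out of API token automount at the ServiceAccount level and at the pod level; when both are set, the pod-level field wins. The two knobs look like this (names and image are assumptions):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nomount-sa
    automountServiceAccountToken: false      # default for pods that use this account
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-service-account-nomountspec-example
    spec:
      serviceAccountName: nomount-sa
      automountServiceAccountToken: false    # pod-level setting overrides the ServiceAccount's
      containers:
      - name: token-test
        image: busybox
        command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo no token mounted"]
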
May 22 11:46:16.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:46:16.339: INFO: namespace: e2e-tests-svcaccounts-bcvfj, resource: bindings, ignored listing per whitelist May 22 11:46:16.382: INFO: namespace e2e-tests-svcaccounts-bcvfj deletion completed in 32.141686805s • [SLOW TEST:33.164 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:46:16.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 22 11:46:16.731: INFO: Waiting up to 5m0s for pod "pod-d862a3b4-9c21-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-mqpnw" to be "success or failure" May 22 11:46:16.734: INFO: Pod "pod-d862a3b4-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.661063ms May 22 11:46:18.862: INFO: Pod "pod-d862a3b4-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13172904s May 22 11:46:21.042: INFO: Pod "pod-d862a3b4-9c21-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.311305822s May 22 11:46:23.059: INFO: Pod "pod-d862a3b4-9c21-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.328586466s STEP: Saw pod success May 22 11:46:23.059: INFO: Pod "pod-d862a3b4-9c21-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:46:23.062: INFO: Trying to get logs from node hunter-worker pod pod-d862a3b4-9c21-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:46:23.097: INFO: Waiting for pod pod-d862a3b4-9c21-11ea-8e9c-0242ac110018 to disappear May 22 11:46:23.126: INFO: Pod pod-d862a3b4-9c21-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:46:23.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mqpnw" for this suite. 
May 22 11:46:29.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:46:29.156: INFO: namespace: e2e-tests-emptydir-mqpnw, resource: bindings, ignored listing per whitelist May 22 11:46:29.215: INFO: namespace e2e-tests-emptydir-mqpnw deletion completed in 6.085550167s • [SLOW TEST:12.833 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:46:29.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 22 11:46:29.337: INFO: Waiting up to 5m0s for pod "downward-api-dff17a29-9c21-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-hfqdg" to be "success or failure" May 22 11:46:29.341: INFO: Pod "downward-api-dff17a29-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021081ms May 22 11:46:31.345: INFO: Pod "downward-api-dff17a29-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007817195s May 22 11:46:33.348: INFO: Pod "downward-api-dff17a29-9c21-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010781151s STEP: Saw pod success May 22 11:46:33.348: INFO: Pod "downward-api-dff17a29-9c21-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:46:33.351: INFO: Trying to get logs from node hunter-worker pod downward-api-dff17a29-9c21-11ea-8e9c-0242ac110018 container dapi-container: STEP: delete the pod May 22 11:46:33.409: INFO: Waiting for pod downward-api-dff17a29-9c21-11ea-8e9c-0242ac110018 to disappear May 22 11:46:33.437: INFO: Pod downward-api-dff17a29-9c21-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:46:33.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hfqdg" for this suite. 
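The Downward API env-var test above asks for limits.cpu and limits.memory on a container that declares no limits; in that case the exposed values fall back to the node's allocatable resources. A sketch with assumed names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-env-example
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
        # no resources.limits set, so the values below report node allocatable
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
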
May 22 11:46:39.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:46:39.534: INFO: namespace: e2e-tests-downward-api-hfqdg, resource: bindings, ignored listing per whitelist May 22 11:46:39.554: INFO: namespace e2e-tests-downward-api-hfqdg deletion completed in 6.11372157s • [SLOW TEST:10.338 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:46:39.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 22 11:46:39.685: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q86pq,SelfLink:/api/v1/namespaces/e2e-tests-watch-q86pq/configmaps/e2e-watch-test-watch-closed,UID:e61dafdd-9c21-11ea-99e8-0242ac110002,ResourceVersion:11920214,Generation:0,CreationTimestamp:2020-05-22 11:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 22 11:46:39.685: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q86pq,SelfLink:/api/v1/namespaces/e2e-tests-watch-q86pq/configmaps/e2e-watch-test-watch-closed,UID:e61dafdd-9c21-11ea-99e8-0242ac110002,ResourceVersion:11920215,Generation:0,CreationTimestamp:2020-05-22 11:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 22 11:46:39.746: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q86pq,SelfLink:/api/v1/namespaces/e2e-tests-watch-q86pq/configmaps/e2e-watch-test-watch-closed,UID:e61dafdd-9c21-11ea-99e8-0242ac110002,ResourceVersion:11920216,Generation:0,CreationTimestamp:2020-05-22 11:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 22 11:46:39.746: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q86pq,SelfLink:/api/v1/namespaces/e2e-tests-watch-q86pq/configmaps/e2e-watch-test-watch-closed,UID:e61dafdd-9c21-11ea-99e8-0242ac110002,ResourceVersion:11920217,Generation:0,CreationTimestamp:2020-05-22 11:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:46:39.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-q86pq" for this suite. May 22 11:46:45.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:46:45.811: INFO: namespace: e2e-tests-watch-q86pq, resource: bindings, ignored listing per whitelist May 22 11:46:45.843: INFO: namespace e2e-tests-watch-q86pq deletion completed in 6.087166035s • [SLOW TEST:6.289 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:46:45.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-zjxh8 May 22 11:46:50.068: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-zjxh8 STEP: checking the pod's current state and verifying that 
restartCount is present May 22 11:46:50.071: INFO: Initial restart count of pod liveness-http is 0 May 22 11:47:16.128: INFO: Restart count of pod e2e-tests-container-probe-zjxh8/liveness-http is now 1 (26.056890175s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:47:16.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-zjxh8" for this suite. May 22 11:47:22.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:47:22.234: INFO: namespace: e2e-tests-container-probe-zjxh8, resource: bindings, ignored listing per whitelist May 22 11:47:22.246: INFO: namespace e2e-tests-container-probe-zjxh8 deletion completed in 6.08870213s • [SLOW TEST:36.403 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:47:22.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 22 11:47:22.543: INFO: Waiting up to 5m0s for pod "downward-api-ffa218fc-9c21-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-hwgqr" to be "success or failure" May 22 11:47:22.551: INFO: Pod "downward-api-ffa218fc-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.984628ms May 22 11:47:24.618: INFO: Pod "downward-api-ffa218fc-9c21-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074637519s May 22 11:47:26.622: INFO: Pod "downward-api-ffa218fc-9c21-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.079327138s STEP: Saw pod success May 22 11:47:26.622: INFO: Pod "downward-api-ffa218fc-9c21-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:47:26.625: INFO: Trying to get logs from node hunter-worker pod downward-api-ffa218fc-9c21-11ea-8e9c-0242ac110018 container dapi-container: STEP: delete the pod May 22 11:47:26.643: INFO: Waiting for pod downward-api-ffa218fc-9c21-11ea-8e9c-0242ac110018 to disappear May 22 11:47:26.647: INFO: Pod downward-api-ffa218fc-9c21-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:47:26.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hwgqr" for this suite. May 22 11:47:32.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:47:32.698: INFO: namespace: e2e-tests-downward-api-hwgqr, resource: bindings, ignored listing per whitelist May 22 11:47:32.796: INFO: namespace e2e-tests-downward-api-hwgqr deletion completed in 6.146253227s • [SLOW TEST:10.550 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:47:32.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 22 11:47:32.908: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix603755061/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:47:32.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t5n6l" for this suite. 
May 22 11:47:39.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:47:39.089: INFO: namespace: e2e-tests-kubectl-t5n6l, resource: bindings, ignored listing per whitelist May 22 11:47:39.112: INFO: namespace e2e-tests-kubectl-t5n6l deletion completed in 6.104610364s • [SLOW TEST:6.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:47:39.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:47:39.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09a7e1a4-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-ndnml" to be "success or failure" May 22 11:47:39.356: INFO: Pod "downwardapi-volume-09a7e1a4-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 47.626842ms May 22 11:47:41.359: INFO: Pod "downwardapi-volume-09a7e1a4-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051329498s May 22 11:47:43.364: INFO: Pod "downwardapi-volume-09a7e1a4-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055694132s STEP: Saw pod success May 22 11:47:43.364: INFO: Pod "downwardapi-volume-09a7e1a4-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:47:43.367: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-09a7e1a4-9c22-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:47:43.409: INFO: Waiting for pod downwardapi-volume-09a7e1a4-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:47:43.427: INFO: Pod downwardapi-volume-09a7e1a4-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:47:43.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ndnml" for this suite. 
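The projected downwardAPI case above reports the node's allocatable memory as the default limit when the container sets none, surfaced as a file through a projected volume rather than an env var. A minimal sketch with assumed names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-downwardapi-example
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/memory_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: memory_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
                  divisor: 1Mi             # with no limit set, this reports node allocatable in MiB
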
May 22 11:47:49.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:47:49.520: INFO: namespace: e2e-tests-projected-ndnml, resource: bindings, ignored listing per whitelist May 22 11:47:49.543: INFO: namespace e2e-tests-projected-ndnml deletion completed in 6.112816062s • [SLOW TEST:10.431 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:47:49.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:47:49.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fd11e0a-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-g29mw" to be "success or failure" May 22 11:47:49.661: INFO: Pod "downwardapi-volume-0fd11e0a-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.326497ms May 22 11:47:51.738: INFO: Pod "downwardapi-volume-0fd11e0a-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091578989s May 22 11:47:53.743: INFO: Pod "downwardapi-volume-0fd11e0a-9c22-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.096239181s May 22 11:47:55.746: INFO: Pod "downwardapi-volume-0fd11e0a-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100021153s STEP: Saw pod success May 22 11:47:55.746: INFO: Pod "downwardapi-volume-0fd11e0a-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:47:55.749: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0fd11e0a-9c22-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:47:55.787: INFO: Waiting for pod downwardapi-volume-0fd11e0a-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:47:55.803: INFO: Pod downwardapi-volume-0fd11e0a-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:47:55.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g29mw" for this suite. 
May 22 11:48:01.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:48:01.891: INFO: namespace: e2e-tests-downward-api-g29mw, resource: bindings, ignored listing per whitelist May 22 11:48:01.926: INFO: namespace e2e-tests-downward-api-g29mw deletion completed in 6.119195836s • [SLOW TEST:12.383 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:48:01.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:48:02.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1730c10c-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-cfrgc" to be "success or failure" May 22 11:48:02.047: INFO: Pod "downwardapi-volume-1730c10c-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.849084ms May 22 11:48:04.098: INFO: Pod "downwardapi-volume-1730c10c-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081577104s May 22 11:48:06.101: INFO: Pod "downwardapi-volume-1730c10c-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085481873s May 22 11:48:08.107: INFO: Pod "downwardapi-volume-1730c10c-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090891494s STEP: Saw pod success May 22 11:48:08.107: INFO: Pod "downwardapi-volume-1730c10c-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:48:08.110: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1730c10c-9c22-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:48:08.176: INFO: Waiting for pod downwardapi-volume-1730c10c-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:48:08.181: INFO: Pod downwardapi-volume-1730c10c-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:48:08.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cfrgc" for this suite. 
May 22 11:48:14.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:48:14.250: INFO: namespace: e2e-tests-projected-cfrgc, resource: bindings, ignored listing per whitelist May 22 11:48:14.278: INFO: namespace e2e-tests-projected-cfrgc deletion completed in 6.09397588s • [SLOW TEST:12.352 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:48:14.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:48:14.375: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e8d06da-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-jkq5r" to be "success or failure" May 22 11:48:14.402: INFO: Pod "downwardapi-volume-1e8d06da-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.833603ms May 22 11:48:16.406: INFO: Pod "downwardapi-volume-1e8d06da-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031026095s May 22 11:48:18.410: INFO: Pod "downwardapi-volume-1e8d06da-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034997279s STEP: Saw pod success May 22 11:48:18.410: INFO: Pod "downwardapi-volume-1e8d06da-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:48:18.414: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1e8d06da-9c22-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:48:18.592: INFO: Waiting for pod downwardapi-volume-1e8d06da-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:48:18.702: INFO: Pod downwardapi-volume-1e8d06da-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:48:18.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jkq5r" for this suite. 
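The DefaultMode test above sets one file mode for everything projected into a Downward API volume. A sketch with assumed values (0400 here; the conformance test picks its own mode and verifies it from inside the container):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-defaultmode-example
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -l /etc/podinfo"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          defaultMode: 0400                # applied to every file in this volume
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
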
May 22 11:48:24.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:48:24.764: INFO: namespace: e2e-tests-downward-api-jkq5r, resource: bindings, ignored listing per whitelist May 22 11:48:24.812: INFO: namespace e2e-tests-downward-api-jkq5r deletion completed in 6.105934908s • [SLOW TEST:10.534 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:48:24.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 22 11:48:24.897: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 11:48:24.915: INFO: Waiting for terminating namespaces to be deleted... May 22 11:48:24.918: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 22 11:48:24.922: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 22 11:48:24.922: INFO: Container kube-proxy ready: true, restart count 0 May 22 11:48:24.922: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 22 11:48:24.922: INFO: Container kindnet-cni ready: true, restart count 0 May 22 11:48:24.922: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 22 11:48:24.922: INFO: Container coredns ready: true, restart count 0 May 22 11:48:24.922: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 22 11:48:24.927: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 22 11:48:24.927: INFO: Container kindnet-cni ready: true, restart count 0 May 22 11:48:24.927: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 22 11:48:24.927: INFO: Container coredns ready: true, restart count 0 May 22 11:48:24.927: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 22 11:48:24.927: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16115746c7294b25], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
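The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") is what a pod produces when its nodeSelector matches no node label, so it stays Pending. An illustrative sketch; the label key/value and image are assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod-example
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1        # assumed image
      nodeSelector:
        env: no-node-carries-this-label    # matches no node, so scheduling fails
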
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:48:25.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-b5xqf" for this suite. May 22 11:48:31.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:48:31.993: INFO: namespace: e2e-tests-sched-pred-b5xqf, resource: bindings, ignored listing per whitelist May 22 11:48:32.034: INFO: namespace e2e-tests-sched-pred-b5xqf deletion completed in 6.087033097s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.221 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:48:32.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 22 11:48:32.143: INFO: namespace e2e-tests-kubectl-w5j8w May 22 11:48:32.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w5j8w' May 22 11:48:32.400: INFO: stderr: "" May 22 11:48:32.400: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 22 11:48:33.405: INFO: Selector matched 1 pods for map[app:redis] May 22 11:48:33.405: INFO: Found 0 / 1 May 22 11:48:34.404: INFO: Selector matched 1 pods for map[app:redis] May 22 11:48:34.405: INFO: Found 0 / 1 May 22 11:48:35.416: INFO: Selector matched 1 pods for map[app:redis] May 22 11:48:35.416: INFO: Found 0 / 1 May 22 11:48:36.404: INFO: Selector matched 1 pods for map[app:redis] May 22 11:48:36.404: INFO: Found 1 / 1 May 22 11:48:36.404: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 22 11:48:36.407: INFO: Selector matched 1 pods for map[app:redis] May 22 11:48:36.408: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 22 11:48:36.408: INFO: wait on redis-master startup in e2e-tests-kubectl-w5j8w May 22 11:48:36.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kzbmm redis-master --namespace=e2e-tests-kubectl-w5j8w' May 22 11:48:36.533: INFO: stderr: "" May 22 11:48:36.533: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 May 11:48:35.366 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 May 11:48:35.366 # Server started, Redis version 3.2.12\n1:M 22 May 11:48:35.366 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 May 11:48:35.366 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 22 11:48:36.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-w5j8w' May 22 11:48:36.670: INFO: stderr: "" May 22 11:48:36.670: INFO: stdout: "service/rm2 exposed\n" May 22 11:48:36.691: INFO: Service rm2 in namespace e2e-tests-kubectl-w5j8w found. STEP: exposing service May 22 11:48:38.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-w5j8w' May 22 11:48:38.823: INFO: stderr: "" May 22 11:48:38.823: INFO: stdout: "service/rm3 exposed\n" May 22 11:48:38.864: INFO: Service rm3 in namespace e2e-tests-kubectl-w5j8w found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:48:40.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w5j8w" for this suite. 
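The expose sequence above turns an existing ReplicationController's selector into Service endpoints, then chains a second Service off the first. Stripped of the test namespace, the same flow looks roughly like this (the RC manifest name is a placeholder):

    kubectl create -f redis-master-rc.yaml                                    # an RC whose pods carry app=redis
    kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379  # Service rm2 selects the RC's pods
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379      # Service rm3 copies rm2's selector
    kubectl get endpoints rm2 rm3                                             # both should list the redis pod's IP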
May 22 11:49:02.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:49:03.037: INFO: namespace: e2e-tests-kubectl-w5j8w, resource: bindings, ignored listing per whitelist May 22 11:49:03.037: INFO: namespace e2e-tests-kubectl-w5j8w deletion completed in 22.16378043s • [SLOW TEST:31.003 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:49:03.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-d5kq7 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 22 11:49:03.219: INFO: Found 0 stateful pods, waiting for 3 May 22 11:49:13.223: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 22 11:49:13.224: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 11:49:13.224: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 22 11:49:23.224: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 22 11:49:23.224: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 11:49:23.224: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 22 11:49:23.249: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 22 11:49:33.308: INFO: Updating stateful set ss2 May 22 11:49:33.318: INFO: Waiting for Pod e2e-tests-statefulset-d5kq7/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 22 11:49:43.435: INFO: Found 2 stateful pods, waiting for 3 May 22 11:49:53.440: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 
22 11:49:53.440: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 11:49:53.440: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 22 11:49:53.463: INFO: Updating stateful set ss2 May 22 11:49:53.474: INFO: Waiting for Pod e2e-tests-statefulset-d5kq7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 22 11:50:03.501: INFO: Updating stateful set ss2 May 22 11:50:03.516: INFO: Waiting for StatefulSet e2e-tests-statefulset-d5kq7/ss2 to complete update May 22 11:50:03.517: INFO: Waiting for Pod e2e-tests-statefulset-d5kq7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 22 11:50:13.522: INFO: Waiting for StatefulSet e2e-tests-statefulset-d5kq7/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 22 11:50:23.524: INFO: Deleting all statefulset in ns e2e-tests-statefulset-d5kq7 May 22 11:50:23.526: INFO: Scaling statefulset ss2 to 0 May 22 11:50:43.543: INFO: Waiting for statefulset status.replicas updated to 0 May 22 11:50:43.546: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:50:43.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-d5kq7" for this suite. May 22 11:50:49.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:50:49.594: INFO: namespace: e2e-tests-statefulset-d5kq7, resource: bindings, ignored listing per whitelist May 22 11:50:49.674: INFO: namespace e2e-tests-statefulset-d5kq7 deletion completed in 6.111341193s • [SLOW TEST:106.637 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:50:49.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:50:49.798: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b30b5ee-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-cxdbs" to be "success or failure" May 22 11:50:49.802: INFO: Pod "downwardapi-volume-7b30b5ee-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.844553ms May 22 11:50:51.806: INFO: Pod "downwardapi-volume-7b30b5ee-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008004091s May 22 11:50:53.811: INFO: Pod "downwardapi-volume-7b30b5ee-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012327557s STEP: Saw pod success May 22 11:50:53.811: INFO: Pod "downwardapi-volume-7b30b5ee-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:50:53.814: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7b30b5ee-9c22-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:50:53.856: INFO: Waiting for pod downwardapi-volume-7b30b5ee-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:50:53.867: INFO: Pod downwardapi-volume-7b30b5ee-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:50:53.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cxdbs" for this suite. May 22 11:50:59.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:50:59.985: INFO: namespace: e2e-tests-downward-api-cxdbs, resource: bindings, ignored listing per whitelist May 22 11:51:00.011: INFO: namespace e2e-tests-downward-api-cxdbs deletion completed in 6.141239169s • [SLOW TEST:10.337 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:51:00.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:51:04.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-zcgll" for this suite. 
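The wrapper-volume check above mounts a Secret volume and a ConfigMap volume side by side in one pod; both are implemented on top of emptyDir "wrappers", and the test asserts they do not clash. An illustrative pod of that shape (all names here are made up):

    kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
    kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: wrapper-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
        volumeMounts:
        - { name: secret-volume, mountPath: /etc/secret-volume }
        - { name: configmap-volume, mountPath: /etc/configmap-volume }
      volumes:
      - name: secret-volume
        secret: { secretName: wrapper-secret }
      - name: configmap-volume
        configMap: { name: wrapper-configmap }
    EOF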
May 22 11:51:10.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:51:10.324: INFO: namespace: e2e-tests-emptydir-wrapper-zcgll, resource: bindings, ignored listing per whitelist May 22 11:51:10.337: INFO: namespace e2e-tests-emptydir-wrapper-zcgll deletion completed in 6.103833627s • [SLOW TEST:10.326 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:51:10.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:51:14.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-pdg7f" for this suite. 
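The read-only busybox case above relies on securityContext.readOnlyRootFilesystem: writes to the container's root filesystem must fail while mounted volumes stay writable. A minimal sketch (pod name, image, and command are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-demo
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "echo test > /file; sleep 240"]   # the write to / should be rejected
        securityContext:
          readOnlyRootFilesystem: true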
May 22 11:51:54.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:51:54.560: INFO: namespace: e2e-tests-kubelet-test-pdg7f, resource: bindings, ignored listing per whitelist May 22 11:51:54.571: INFO: namespace e2e-tests-kubelet-test-pdg7f deletion completed in 40.099768641s • [SLOW TEST:44.233 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:51:54.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-mb7pv.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-mb7pv.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-mb7pv.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-mb7pv.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-mb7pv.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-mb7pv.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 22 11:52:02.812: INFO: DNS probes using e2e-tests-dns-mb7pv/dns-test-a1db5b4c-9c22-11ea-8e9c-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:52:02.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-mb7pv" for this suite. 
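The probe loops above run dig over UDP and TCP against kubernetes.default at several search-path depths and against the pod's own A record. A quick manual spot check of the same records from inside the cluster (busybox:1.28 is chosen here because its nslookup behaves; the pod name is illustrative):

    kubectl run dns-check --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local
    kubectl logs dns-check      # expect the kubernetes Service ClusterIP in the answer
    kubectl delete pod dns-check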
May 22 11:52:08.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:52:09.039: INFO: namespace: e2e-tests-dns-mb7pv, resource: bindings, ignored listing per whitelist May 22 11:52:09.055: INFO: namespace e2e-tests-dns-mb7pv deletion completed in 6.16323321s • [SLOW TEST:14.484 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:52:09.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 11:52:09.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-rxmhp' May 22 11:52:13.053: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 11:52:13.053: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 22 11:52:15.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-rxmhp' May 22 11:52:15.199: INFO: stderr: "" May 22 11:52:15.199: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:52:15.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rxmhp" for this suite. 
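The deprecation warning captured above is worth noting: with the default generator, kubectl run creates a Deployment, which is exactly what this test verifies, but the non-deprecated equivalents differ. Roughly (names other than the test's are placeholders):

    kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine      # default generator: a Deployment (deprecated path)
    kubectl run my-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine  # a bare Pod instead
    kubectl create deployment my-deploy --image=docker.io/library/nginx:1.14-alpine        # the suggested replacement for Deployments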
May 22 11:52:21.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:52:21.337: INFO: namespace: e2e-tests-kubectl-rxmhp, resource: bindings, ignored listing per whitelist May 22 11:52:21.383: INFO: namespace e2e-tests-kubectl-rxmhp deletion completed in 6.178638308s • [SLOW TEST:12.328 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:52:21.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 11:52:21.482: INFO: Creating ReplicaSet my-hostname-basic-b1d8ecbe-9c22-11ea-8e9c-0242ac110018 May 22 11:52:21.507: INFO: Pod name my-hostname-basic-b1d8ecbe-9c22-11ea-8e9c-0242ac110018: Found 0 pods out of 1 May 22 11:52:26.510: INFO: Pod name my-hostname-basic-b1d8ecbe-9c22-11ea-8e9c-0242ac110018: Found 1 pods out of 1 May 22 11:52:26.510: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b1d8ecbe-9c22-11ea-8e9c-0242ac110018" is running May 22 11:52:26.512: INFO: Pod "my-hostname-basic-b1d8ecbe-9c22-11ea-8e9c-0242ac110018-dg7q6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 11:52:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 11:52:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 11:52:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 11:52:21 +0000 UTC Reason: Message:}]) May 22 11:52:26.512: INFO: Trying to dial the pod May 22 11:52:31.522: INFO: Controller my-hostname-basic-b1d8ecbe-9c22-11ea-8e9c-0242ac110018: Got expected result from replica 1 [my-hostname-basic-b1d8ecbe-9c22-11ea-8e9c-0242ac110018-dg7q6]: "my-hostname-basic-b1d8ecbe-9c22-11ea-8e9c-0242ac110018-dg7q6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:52:31.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-955sj" for this suite. 
May 22 11:52:37.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:52:37.665: INFO: namespace: e2e-tests-replicaset-955sj, resource: bindings, ignored listing per whitelist May 22 11:52:37.712: INFO: namespace e2e-tests-replicaset-955sj deletion completed in 6.186690996s • [SLOW TEST:16.329 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:52:37.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 11:52:38.696: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbf81742-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-nnp6t" to be "success or failure" May 22 11:52:39.235: INFO: Pod "downwardapi-volume-bbf81742-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 538.976025ms May 22 11:52:41.240: INFO: Pod "downwardapi-volume-bbf81742-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543629264s May 22 11:52:43.244: INFO: Pod "downwardapi-volume-bbf81742-9c22-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.548074383s May 22 11:52:45.248: INFO: Pod "downwardapi-volume-bbf81742-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.551946436s STEP: Saw pod success May 22 11:52:45.248: INFO: Pod "downwardapi-volume-bbf81742-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:52:45.252: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-bbf81742-9c22-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 11:52:45.274: INFO: Waiting for pod downwardapi-volume-bbf81742-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:52:45.278: INFO: Pod downwardapi-volume-bbf81742-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:52:45.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nnp6t" for this suite. 
May 22 11:52:51.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:52:51.323: INFO: namespace: e2e-tests-downward-api-nnp6t, resource: bindings, ignored listing per whitelist May 22 11:52:51.367: INFO: namespace e2e-tests-downward-api-nnp6t deletion completed in 6.086714065s • [SLOW TEST:13.655 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:52:51.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 22 11:52:51.490: INFO: Waiting up to 5m0s for pod "pod-c3b99950-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-4j9lt" to be "success or failure" May 22 11:52:51.500: INFO: Pod "pod-c3b99950-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14033ms May 22 11:52:53.504: INFO: Pod "pod-c3b99950-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014138706s May 22 11:52:55.509: INFO: Pod "pod-c3b99950-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018344476s STEP: Saw pod success May 22 11:52:55.509: INFO: Pod "pod-c3b99950-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:52:55.512: INFO: Trying to get logs from node hunter-worker2 pod pod-c3b99950-9c22-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:52:55.606: INFO: Waiting for pod pod-c3b99950-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:52:55.642: INFO: Pod pod-c3b99950-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:52:55.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4j9lt" for this suite. 
May 22 11:53:01.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:53:01.719: INFO: namespace: e2e-tests-emptydir-4j9lt, resource: bindings, ignored listing per whitelist May 22 11:53:01.765: INFO: namespace e2e-tests-emptydir-4j9lt deletion completed in 6.110014767s • [SLOW TEST:10.398 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:53:01.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 22 11:53:01.974: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-kvrxf" to be "success or failure" May 22 11:53:02.013: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 38.839607ms May 22 11:53:04.017: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043217152s May 22 11:53:06.036: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061506994s May 22 11:53:08.040: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065973787s STEP: Saw pod success May 22 11:53:08.040: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 22 11:53:08.044: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 22 11:53:08.097: INFO: Waiting for pod pod-host-path-test to disappear May 22 11:53:08.103: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:53:08.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-kvrxf" for this suite. 
May 22 11:53:14.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:53:14.264: INFO: namespace: e2e-tests-hostpath-kvrxf, resource: bindings, ignored listing per whitelist May 22 11:53:14.273: INFO: namespace e2e-tests-hostpath-kvrxf deletion completed in 6.16726689s • [SLOW TEST:12.508 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:53:14.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-d17a0547-9c22-11ea-8e9c-0242ac110018 STEP: Creating secret with name secret-projected-all-test-volume-d17a0515-9c22-11ea-8e9c-0242ac110018 STEP: Creating a pod to test Check all projections for projected volume plugin May 22 11:53:14.683: INFO: Waiting up to 5m0s for pod "projected-volume-d17a0453-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-d5zk8" to be "success or failure" May 22 11:53:14.958: INFO: Pod "projected-volume-d17a0453-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 274.760304ms May 22 11:53:16.961: INFO: Pod "projected-volume-d17a0453-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278030254s May 22 11:53:18.964: INFO: Pod "projected-volume-d17a0453-9c22-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.281160026s May 22 11:53:21.225: INFO: Pod "projected-volume-d17a0453-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.541657714s STEP: Saw pod success May 22 11:53:21.225: INFO: Pod "projected-volume-d17a0453-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:53:21.227: INFO: Trying to get logs from node hunter-worker pod projected-volume-d17a0453-9c22-11ea-8e9c-0242ac110018 container projected-all-volume-test: STEP: delete the pod May 22 11:53:21.252: INFO: Waiting for pod projected-volume-d17a0453-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:53:21.255: INFO: Pod projected-volume-d17a0453-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:53:21.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d5zk8" for this suite. 
May 22 11:53:27.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:53:27.309: INFO: namespace: e2e-tests-projected-d5zk8, resource: bindings, ignored listing per whitelist May 22 11:53:27.343: INFO: namespace e2e-tests-projected-d5zk8 deletion completed in 6.084521919s • [SLOW TEST:13.070 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:53:27.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 22 11:53:27.487: INFO: Waiting up to 5m0s for pod "pod-d92a65b5-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-mmw4q" to be "success or failure" May 22 11:53:27.493: INFO: Pod "pod-d92a65b5-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239484ms May 22 11:53:29.505: INFO: Pod "pod-d92a65b5-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018658889s May 22 11:53:31.523: INFO: Pod "pod-d92a65b5-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036223931s STEP: Saw pod success May 22 11:53:31.523: INFO: Pod "pod-d92a65b5-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:53:31.525: INFO: Trying to get logs from node hunter-worker2 pod pod-d92a65b5-9c22-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:53:31.598: INFO: Waiting for pod pod-d92a65b5-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:53:31.614: INFO: Pod pod-d92a65b5-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:53:31.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mmw4q" for this suite. 
May 22 11:53:37.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:53:37.677: INFO: namespace: e2e-tests-emptydir-mmw4q, resource: bindings, ignored listing per whitelist May 22 11:53:37.720: INFO: namespace e2e-tests-emptydir-mmw4q deletion completed in 6.102079626s • [SLOW TEST:10.377 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:53:37.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 11:53:37.797: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:53:42.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8nsl5" for this suite. 
May 22 11:54:22.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:54:22.033: INFO: namespace: e2e-tests-pods-8nsl5, resource: bindings, ignored listing per whitelist May 22 11:54:22.133: INFO: namespace e2e-tests-pods-8nsl5 deletion completed in 40.117713802s • [SLOW TEST:44.413 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:54:22.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-f9e305c4-9c22-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 11:54:22.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9e815f2-9c22-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-jkhk4" to be "success or failure" May 22 11:54:22.393: INFO: Pod "pod-projected-secrets-f9e815f2-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331632ms May 22 11:54:24.474: INFO: Pod "pod-projected-secrets-f9e815f2-9c22-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089628612s May 22 11:54:26.478: INFO: Pod "pod-projected-secrets-f9e815f2-9c22-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093333604s STEP: Saw pod success May 22 11:54:26.478: INFO: Pod "pod-projected-secrets-f9e815f2-9c22-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:54:26.480: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-f9e815f2-9c22-11ea-8e9c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 22 11:54:26.537: INFO: Waiting for pod pod-projected-secrets-f9e815f2-9c22-11ea-8e9c-0242ac110018 to disappear May 22 11:54:26.563: INFO: Pod pod-projected-secrets-f9e815f2-9c22-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:54:26.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jkhk4" for this suite. 
May 22 11:54:32.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:54:32.656: INFO: namespace: e2e-tests-projected-jkhk4, resource: bindings, ignored listing per whitelist May 22 11:54:32.735: INFO: namespace e2e-tests-projected-jkhk4 deletion completed in 6.168503535s • [SLOW TEST:10.602 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:54:32.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 22 11:54:32.871: INFO: Waiting up to 5m0s for pod "pod-0023f342-9c23-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-jnrmf" to be "success or failure" May 22 11:54:32.886: INFO: Pod "pod-0023f342-9c23-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.393878ms May 22 11:54:34.890: INFO: Pod "pod-0023f342-9c23-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019291866s May 22 11:54:36.894: INFO: Pod "pod-0023f342-9c23-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.023812591s May 22 11:54:38.899: INFO: Pod "pod-0023f342-9c23-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028720299s STEP: Saw pod success May 22 11:54:38.899: INFO: Pod "pod-0023f342-9c23-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:54:38.903: INFO: Trying to get logs from node hunter-worker pod pod-0023f342-9c23-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:54:38.922: INFO: Waiting for pod pod-0023f342-9c23-11ea-8e9c-0242ac110018 to disappear May 22 11:54:38.938: INFO: Pod pod-0023f342-9c23-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:54:38.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jnrmf" for this suite. 
May 22 11:54:44.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:54:44.991: INFO: namespace: e2e-tests-emptydir-jnrmf, resource: bindings, ignored listing per whitelist May 22 11:54:45.023: INFO: namespace e2e-tests-emptydir-jnrmf deletion completed in 6.082836513s • [SLOW TEST:12.288 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:54:45.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:55:20.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-xsvtn" for this suite. 
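The terminate-cmd-rpa / rpof / rpn containers above correspond to restartPolicy Always, OnFailure, and Never; each variant exits with a chosen code and the test then inspects phase, readiness, state, and restart count. A sketch of one such probe (names, image, and exit code are assumptions):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: terminate-cmd-demo
    spec:
      restartPolicy: Never                # the "rpn" flavour
      containers:
      - name: terminate-cmd-demo
        image: busybox
        command: ["sh", "-c", "exit 0"]
    EOF
    kubectl get pod terminate-cmd-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'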
May 22 11:55:26.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:55:26.338: INFO: namespace: e2e-tests-container-runtime-xsvtn, resource: bindings, ignored listing per whitelist May 22 11:55:26.402: INFO: namespace e2e-tests-container-runtime-xsvtn deletion completed in 6.086321077s • [SLOW TEST:41.378 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:55:26.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 11:55:26.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-qjlgn' May 22 11:55:26.601: INFO: stderr: "" May 22 11:55:26.601: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 22 11:55:26.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qjlgn' May 22 11:55:30.319: INFO: stderr: "" May 22 11:55:30.319: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:55:30.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qjlgn" for this suite. 
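With --restart=Never and the run-pod/v1 generator, the kubectl run call above creates a bare Pod with no owning controller, which is what the test verifies before deleting it. A quick way to see that:

    kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
    kubectl get pod e2e-test-nginx-pod -o jsonpath='{.metadata.ownerReferences}'   # empty: no Deployment or ReplicaSet behind it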
May 22 11:55:36.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:55:36.543: INFO: namespace: e2e-tests-kubectl-qjlgn, resource: bindings, ignored listing per whitelist May 22 11:55:36.570: INFO: namespace e2e-tests-kubectl-qjlgn deletion completed in 6.123945186s • [SLOW TEST:10.168 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:55:36.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 22 11:55:36.708: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qn26s,SelfLink:/api/v1/namespaces/e2e-tests-watch-qn26s/configmaps/e2e-watch-test-label-changed,UID:262faad9-9c23-11ea-99e8-0242ac110002,ResourceVersion:11922212,Generation:0,CreationTimestamp:2020-05-22 11:55:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 22 11:55:36.708: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qn26s,SelfLink:/api/v1/namespaces/e2e-tests-watch-qn26s/configmaps/e2e-watch-test-label-changed,UID:262faad9-9c23-11ea-99e8-0242ac110002,ResourceVersion:11922213,Generation:0,CreationTimestamp:2020-05-22 11:55:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 22 11:55:36.708: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qn26s,SelfLink:/api/v1/namespaces/e2e-tests-watch-qn26s/configmaps/e2e-watch-test-label-changed,UID:262faad9-9c23-11ea-99e8-0242ac110002,ResourceVersion:11922214,Generation:0,CreationTimestamp:2020-05-22 11:55:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 22 11:55:46.854: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qn26s,SelfLink:/api/v1/namespaces/e2e-tests-watch-qn26s/configmaps/e2e-watch-test-label-changed,UID:262faad9-9c23-11ea-99e8-0242ac110002,ResourceVersion:11922235,Generation:0,CreationTimestamp:2020-05-22 11:55:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 22 11:55:46.854: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qn26s,SelfLink:/api/v1/namespaces/e2e-tests-watch-qn26s/configmaps/e2e-watch-test-label-changed,UID:262faad9-9c23-11ea-99e8-0242ac110002,ResourceVersion:11922236,Generation:0,CreationTimestamp:2020-05-22 11:55:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 22 11:55:46.854: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qn26s,SelfLink:/api/v1/namespaces/e2e-tests-watch-qn26s/configmaps/e2e-watch-test-label-changed,UID:262faad9-9c23-11ea-99e8-0242ac110002,ResourceVersion:11922237,Generation:0,CreationTimestamp:2020-05-22 11:55:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:55:46.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-qn26s" for this suite. 
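The ADDED/MODIFIED/DELETED sequence above is what a label-selector watch delivers: dropping the label produces a DELETED event on the watch even though the configmap still exists, and restoring it produces a fresh ADDED. A minimal client-go sketch of such a watch, assuming a recent client-go (the v1.13-era client used in this run omits the context argument) and the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch only configmaps carrying the label the test flips back and forth.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// Events arrive in the same "Got : <TYPE> <object>" order the log records.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}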
May 22 11:55:52.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:55:52.921: INFO: namespace: e2e-tests-watch-qn26s, resource: bindings, ignored listing per whitelist May 22 11:55:52.948: INFO: namespace e2e-tests-watch-qn26s deletion completed in 6.085990207s • [SLOW TEST:16.377 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:55:52.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 22 11:55:59.572: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2fefc781-9c23-11ea-8e9c-0242ac110018" May 22 11:55:59.572: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2fefc781-9c23-11ea-8e9c-0242ac110018" in namespace "e2e-tests-pods-nm8k2" to be "terminated due to deadline exceeded" May 22 11:55:59.588: INFO: Pod "pod-update-activedeadlineseconds-2fefc781-9c23-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 15.663394ms May 22 11:56:01.592: INFO: Pod "pod-update-activedeadlineseconds-2fefc781-9c23-11ea-8e9c-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020027882s May 22 11:56:01.592: INFO: Pod "pod-update-activedeadlineseconds-2fefc781-9c23-11ea-8e9c-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:56:01.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-nm8k2" for this suite. 
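activeDeadlineSeconds is one of the few pod-spec fields the API allows to be updated in place; once the deadline elapses the kubelet kills the pod and the API reports Phase=Failed, Reason=DeadlineExceeded, which is exactly the transition logged above. A hedged client-go sketch of that update (recent client-go signatures; pod name, namespace and deadline value are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("default") // the e2e run used a generated e2e-tests-pods-* namespace
	pod, err := pods.Get(context.TODO(), "pod-update-activedeadlineseconds", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Setting (or lowering) the deadline on a running pod is permitted; the
	// kubelet terminates the pod once the deadline passes.
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}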
May 22 11:56:07.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:56:07.688: INFO: namespace: e2e-tests-pods-nm8k2, resource: bindings, ignored listing per whitelist May 22 11:56:07.690: INFO: namespace e2e-tests-pods-nm8k2 deletion completed in 6.094290475s • [SLOW TEST:14.742 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:56:07.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:56:14.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-6bvrp" for this suite. 
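Adoption works because the replication controller's selector matches the 'name' label already carried by the orphan pod, so the controller claims the existing pod via an ownerReference instead of creating a new replica. A minimal sketch of such a ReplicationController using the k8s.io/api types (the image is an assumption):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption"}
	// Selector matches the pre-existing orphan pod, so the RC adopts it
	// rather than starting a replacement.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}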
May 22 11:56:36.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:56:36.848: INFO: namespace: e2e-tests-replication-controller-6bvrp, resource: bindings, ignored listing per whitelist May 22 11:56:36.908: INFO: namespace e2e-tests-replication-controller-6bvrp deletion completed in 22.090708468s • [SLOW TEST:29.218 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:56:36.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 22 11:56:37.029: INFO: Waiting up to 5m0s for pod "pod-4a27dcf6-9c23-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-dmdm7" to be "success or failure" May 22 11:56:37.033: INFO: Pod "pod-4a27dcf6-9c23-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.958545ms May 22 11:56:39.038: INFO: Pod "pod-4a27dcf6-9c23-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009026681s May 22 11:56:41.041: INFO: Pod "pod-4a27dcf6-9c23-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012763405s May 22 11:56:43.045: INFO: Pod "pod-4a27dcf6-9c23-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015985905s STEP: Saw pod success May 22 11:56:43.045: INFO: Pod "pod-4a27dcf6-9c23-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 11:56:43.047: INFO: Trying to get logs from node hunter-worker pod pod-4a27dcf6-9c23-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 11:56:43.077: INFO: Waiting for pod pod-4a27dcf6-9c23-11ea-8e9c-0242ac110018 to disappear May 22 11:56:43.111: INFO: Pod pod-4a27dcf6-9c23-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:56:43.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dmdm7" for this suite. 
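The emptyDir variants differ only in which user the test container runs as, which file mode it creates, and the medium backing the volume ("default" meaning node disk rather than tmpfs). A rough sketch of the volume wiring, with an assumed busybox command standing in for the suite's own test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with the mode under test and report it back.
				Command:      []string{"sh", "-c", "touch /mnt/file && chmod 0777 /mnt/file && ls -l /mnt/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}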
May 22 11:56:49.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:56:49.248: INFO: namespace: e2e-tests-emptydir-dmdm7, resource: bindings, ignored listing per whitelist May 22 11:56:49.400: INFO: namespace e2e-tests-emptydir-dmdm7 deletion completed in 6.28487802s • [SLOW TEST:12.492 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:56:49.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-t2w45 May 22 11:56:55.865: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-t2w45 STEP: checking the pod's current state and verifying that restartCount is present May 22 11:56:55.868: INFO: Initial restart count of pod liveness-http is 0 May 22 11:57:13.906: INFO: Restart count of pod e2e-tests-container-probe-t2w45/liveness-http is now 1 (18.038419856s elapsed) May 22 11:57:34.102: INFO: Restart count of pod e2e-tests-container-probe-t2w45/liveness-http is now 2 (38.23414953s elapsed) May 22 11:57:54.174: INFO: Restart count of pod e2e-tests-container-probe-t2w45/liveness-http is now 3 (58.306942814s elapsed) May 22 11:58:14.302: INFO: Restart count of pod e2e-tests-container-probe-t2w45/liveness-http is now 4 (1m18.434171385s elapsed) May 22 11:59:14.430: INFO: Restart count of pod e2e-tests-container-probe-t2w45/liveness-http is now 5 (2m18.562875948s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:59:14.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-t2w45" for this suite. 
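The "Restart count ... is now N" lines track the invariant under test: every liveness-probe failure makes the kubelet restart the container, and restartCount only ever increases. A hedged sketch of a liveness-http pod using the k8s.io/api types (the image is a stand-in for one whose /healthz starts failing; recent API versions name the embedded probe field ProbeHandler, while the v1.13-era types in this run call it Handler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "example.com/liveness-demo:latest", // assumption: serves /healthz, then starts failing
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	// Each probe failure triggers a container restart, so
	// status.containerStatuses[0].restartCount can only grow over time.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}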
May 22 11:59:20.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:59:20.523: INFO: namespace: e2e-tests-container-probe-t2w45, resource: bindings, ignored listing per whitelist May 22 11:59:20.528: INFO: namespace e2e-tests-container-probe-t2w45 deletion completed in 6.082192351s • [SLOW TEST:151.127 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:59:20.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 22 11:59:20.635: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:59:28.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-f8f5x" for this suite. 
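Init containers run one at a time, in order, and each must exit successfully before the next one — and finally the app containers — start; with restartPolicy Never a failed init container fails the whole pod. A minimal sketch of a pod with two init containers (names and images are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// spec.initContainers, as referenced by the "PodSpec: initContainers
			// in spec.initContainers" log line above.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"true"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}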
May 22 11:59:34.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:59:34.694: INFO: namespace: e2e-tests-init-container-f8f5x, resource: bindings, ignored listing per whitelist May 22 11:59:34.704: INFO: namespace e2e-tests-init-container-f8f5x deletion completed in 6.099162637s • [SLOW TEST:14.176 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:59:34.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 22 11:59:39.007: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-b4251982-9c23-11ea-8e9c-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-qvsmr", SelfLink:"/api/v1/namespaces/e2e-tests-pods-qvsmr/pods/pod-submit-remove-b4251982-9c23-11ea-8e9c-0242ac110018", UID:"b43827f1-9c23-11ea-99e8-0242ac110002", ResourceVersion:"11922876", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725745574, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"834014708"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nxc7f", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020cc080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nxc7f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f4d158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0019e89c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f4d260)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f4d280)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000f4d288), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000f4d28c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725745574, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725745578, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725745578, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725745574, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.10", StartTime:(*v1.Time)(0xc00230d580), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00230d5e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://2a5a4229febeb25d63881885b379e18b3954fe9fa6e61a6009864177356ea231"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 11:59:51.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qvsmr" for this suite. May 22 11:59:57.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 11:59:57.362: INFO: namespace: e2e-tests-pods-qvsmr, resource: bindings, ignored listing per whitelist May 22 11:59:57.382: INFO: namespace e2e-tests-pods-qvsmr deletion completed in 6.113157216s • [SLOW TEST:22.678 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 11:59:57.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 22 11:59:57.493: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:59:57.495: INFO: Number of nodes with available pods: 0 May 22 11:59:57.495: INFO: Node hunter-worker is running more than one daemon pod May 22 11:59:58.499: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:59:58.501: INFO: Number of nodes with available pods: 0 May 22 11:59:58.501: INFO: Node hunter-worker is running more than one daemon pod May 22 11:59:59.499: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 11:59:59.502: INFO: Number of nodes with available pods: 0 May 22 11:59:59.502: INFO: Node hunter-worker is running more than one daemon pod May 22 12:00:00.530: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:00:00.533: INFO: Number of nodes with available pods: 0 May 22 12:00:00.533: INFO: Node hunter-worker is running more than one daemon pod May 22 12:00:01.501: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:00:01.505: INFO: Number of nodes with available pods: 0 May 22 12:00:01.505: INFO: Node hunter-worker is running more than one daemon pod May 22 12:00:02.499: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:00:02.503: INFO: Number of nodes with available pods: 2 May 22 12:00:02.503: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 22 12:00:02.536: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:00:02.553: INFO: Number of nodes with available pods: 2 May 22 12:00:02.553: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
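A rough sketch of the "simple DaemonSet" shape this test creates, using the apps/v1 Go types (labels and image are assumptions). The pod template carries no toleration for the master's NoSchedule taint, which is why the log keeps skipping hunter-control-plane; and when a daemon pod is forced to Failed, the DaemonSet controller deletes it and creates a replacement on that node, which is the revival being waited for above:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No toleration for node-role.kubernetes.io/master:NoSchedule,
					// so the tainted control-plane node is skipped.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}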
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bcf7p, will wait for the garbage collector to delete the pods May 22 12:00:03.625: INFO: Deleting DaemonSet.extensions daemon-set took: 7.14451ms May 22 12:00:03.726: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.245656ms May 22 12:00:11.829: INFO: Number of nodes with available pods: 0 May 22 12:00:11.829: INFO: Number of running nodes: 0, number of available pods: 0 May 22 12:00:11.832: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bcf7p/daemonsets","resourceVersion":"11923006"},"items":null} May 22 12:00:11.835: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bcf7p/pods","resourceVersion":"11923006"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:00:11.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-bcf7p" for this suite. May 22 12:00:17.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:00:17.929: INFO: namespace: e2e-tests-daemonsets-bcf7p, resource: bindings, ignored listing per whitelist May 22 12:00:17.932: INFO: namespace e2e-tests-daemonsets-bcf7p deletion completed in 6.085109598s • [SLOW TEST:20.550 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:00:17.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 22 12:00:18.858: INFO: Waiting up to 5m0s for pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l" in namespace "e2e-tests-svcaccounts-xqrrm" to be "success or failure" May 22 12:00:18.864: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l": Phase="Pending", Reason="", readiness=false. Elapsed: 5.145375ms May 22 12:00:20.868: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009126304s May 22 12:00:23.168: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.30931231s May 22 12:00:25.172: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312974042s May 22 12:00:27.175: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.316813478s STEP: Saw pod success May 22 12:00:27.175: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l" satisfied condition "success or failure" May 22 12:00:27.179: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l container token-test: STEP: delete the pod May 22 12:00:27.259: INFO: Waiting for pod pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l to disappear May 22 12:00:27.271: INFO: Pod pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-9552l no longer exists STEP: Creating a pod to test consume service account root CA May 22 12:00:27.274: INFO: Waiting up to 5m0s for pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45" in namespace "e2e-tests-svcaccounts-xqrrm" to be "success or failure" May 22 12:00:27.277: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.911816ms May 22 12:00:29.282: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007444585s May 22 12:00:31.377: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102945982s May 22 12:00:33.382: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45": Phase="Running", Reason="", readiness=false. Elapsed: 6.107340575s May 22 12:00:35.386: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111669589s STEP: Saw pod success May 22 12:00:35.386: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45" satisfied condition "success or failure" May 22 12:00:35.389: INFO: Trying to get logs from node hunter-worker pod pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45 container root-ca-test: STEP: delete the pod May 22 12:00:35.426: INFO: Waiting for pod pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45 to disappear May 22 12:00:35.455: INFO: Pod pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-4jg45 no longer exists STEP: Creating a pod to test consume service account namespace May 22 12:00:35.458: INFO: Waiting up to 5m0s for pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn" in namespace "e2e-tests-svcaccounts-xqrrm" to be "success or failure" May 22 12:00:35.472: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn": Phase="Pending", Reason="", readiness=false. Elapsed: 13.626819ms May 22 12:00:37.475: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016746619s May 22 12:00:39.582: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123180904s May 22 12:00:41.586: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.127063741s May 22 12:00:43.590: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131276139s STEP: Saw pod success May 22 12:00:43.590: INFO: Pod "pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn" satisfied condition "success or failure" May 22 12:00:43.592: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn container namespace-test: STEP: delete the pod May 22 12:00:43.645: INFO: Waiting for pod pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn to disappear May 22 12:00:43.649: INFO: Pod pod-service-account-ce61c137-9c23-11ea-8e9c-0242ac110018-f9fqn no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:00:43.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-xqrrm" for this suite. May 22 12:00:49.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:00:49.762: INFO: namespace: e2e-tests-svcaccounts-xqrrm, resource: bindings, ignored listing per whitelist May 22 12:00:49.776: INFO: namespace e2e-tests-svcaccounts-xqrrm deletion completed in 6.098034859s • [SLOW TEST:31.843 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:00:49.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 22 12:00:49.860: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 12:00:49.897: INFO: Waiting for terminating namespaces to be deleted... 
May 22 12:00:49.900: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 22 12:00:49.905: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 22 12:00:49.905: INFO: Container kube-proxy ready: true, restart count 0 May 22 12:00:49.905: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 22 12:00:49.905: INFO: Container kindnet-cni ready: true, restart count 0 May 22 12:00:49.905: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 22 12:00:49.905: INFO: Container coredns ready: true, restart count 0 May 22 12:00:49.905: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 22 12:00:49.910: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 22 12:00:49.910: INFO: Container kindnet-cni ready: true, restart count 0 May 22 12:00:49.910: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 22 12:00:49.910: INFO: Container coredns ready: true, restart count 0 May 22 12:00:49.910: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 22 12:00:49.910: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 22 12:00:49.974: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker May 22 12:00:49.974: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 22 12:00:49.974: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 22 12:00:49.974: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 22 12:00:49.974: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 22 12:00:49.974: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-e0ee9b6b-9c23-11ea-8e9c-0242ac110018.161157f44211497f], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-ch4qf/filler-pod-e0ee9b6b-9c23-11ea-8e9c-0242ac110018 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0ee9b6b-9c23-11ea-8e9c-0242ac110018.161157f4d4461cf8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0ee9b6b-9c23-11ea-8e9c-0242ac110018.161157f5245637e3], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0ee9b6b-9c23-11ea-8e9c-0242ac110018.161157f536381ea4], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0ef9c53-9c23-11ea-8e9c-0242ac110018.161157f442f232e4], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-ch4qf/filler-pod-e0ef9c53-9c23-11ea-8e9c-0242ac110018 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0ef9c53-9c23-11ea-8e9c-0242ac110018.161157f49373f7a8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0ef9c53-9c23-11ea-8e9c-0242ac110018.161157f4fb8517aa], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e0ef9c53-9c23-11ea-8e9c-0242ac110018.161157f51f583c6d], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.161157f5a8e4ec1f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:00:57.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-ch4qf" for this suite. 
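The test fills each node with "filler" pods sized from that node's allocatable CPU, then submits one more pod whose request cannot fit anywhere, so the scheduler leaves it Pending with the "Insufficient cpu" FailedScheduling event shown above. A sketch of such an unschedulable pod (the request quantity is illustrative; the suite derives it from node capacity):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose CPU request exceeds what remains on every schedulable node;
	// the scheduler records a FailedScheduling event and the pod stays Pending.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("600m"), // illustrative value
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}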
May 22 12:01:03.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:01:03.943: INFO: namespace: e2e-tests-sched-pred-ch4qf, resource: bindings, ignored listing per whitelist May 22 12:01:04.083: INFO: namespace e2e-tests-sched-pred-ch4qf deletion completed in 6.400496155s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:14.306 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:01:04.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-pq4j STEP: Creating a pod to test atomic-volume-subpath May 22 12:01:04.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pq4j" in namespace "e2e-tests-subpath-p9q6l" to be "success or failure" May 22 12:01:04.239: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Pending", Reason="", readiness=false. Elapsed: 18.958441ms May 22 12:01:06.243: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023394586s May 22 12:01:08.248: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027995992s May 22 12:01:10.252: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0319523s May 22 12:01:12.256: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. Elapsed: 8.036311369s May 22 12:01:14.259: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. Elapsed: 10.039522258s May 22 12:01:16.264: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. Elapsed: 12.043807363s May 22 12:01:18.268: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. Elapsed: 14.048040439s May 22 12:01:20.272: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. Elapsed: 16.052283392s May 22 12:01:22.276: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. Elapsed: 18.056247295s May 22 12:01:24.280: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.059989453s May 22 12:01:26.284: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. Elapsed: 22.063762564s May 22 12:01:28.289: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. Elapsed: 24.069112484s May 22 12:01:30.360: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Running", Reason="", readiness=false. Elapsed: 26.140082295s May 22 12:01:32.364: INFO: Pod "pod-subpath-test-projected-pq4j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.143940472s STEP: Saw pod success May 22 12:01:32.364: INFO: Pod "pod-subpath-test-projected-pq4j" satisfied condition "success or failure" May 22 12:01:32.366: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-pq4j container test-container-subpath-projected-pq4j: STEP: delete the pod May 22 12:01:32.518: INFO: Waiting for pod pod-subpath-test-projected-pq4j to disappear May 22 12:01:32.526: INFO: Pod pod-subpath-test-projected-pq4j no longer exists STEP: Deleting pod pod-subpath-test-projected-pq4j May 22 12:01:32.526: INFO: Deleting pod "pod-subpath-test-projected-pq4j" in namespace "e2e-tests-subpath-p9q6l" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:01:32.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-p9q6l" for this suite. May 22 12:01:38.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:01:38.626: INFO: namespace: e2e-tests-subpath-p9q6l, resource: bindings, ignored listing per whitelist May 22 12:01:38.684: INFO: namespace e2e-tests-subpath-p9q6l deletion completed in 6.153572046s • [SLOW TEST:34.602 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:01:38.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-x6cqx in namespace e2e-tests-proxy-9nwq9 I0522 12:01:38.831296 6 runners.go:184] Created replication controller with name: proxy-service-x6cqx, namespace: e2e-tests-proxy-9nwq9, replica count: 1 I0522 12:01:39.881728 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 12:01:40.881950 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 
running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 12:01:41.882165 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 12:01:42.882441 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 12:01:43.882623 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 12:01:44.882812 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 12:01:45.882991 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 12:01:46.883281 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 12:01:47.883518 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 12:01:48.883756 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 12:01:49.883971 6 runners.go:184] proxy-service-x6cqx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 22 12:01:49.887: INFO: setup took 11.104305911s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 22 12:01:49.894: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9nwq9/pods/proxy-service-x6cqx-gzk6d:162/proxy/: bar (200; 6.62954ms) May 22 12:01:49.909: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9nwq9/pods/http:proxy-service-x6cqx-gzk6d:160/proxy/: foo (200; 21.728333ms) May 22 12:01:49.909: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9nwq9/services/http:proxy-service-x6cqx:portname1/proxy/: foo (200; 21.714155ms) May 22 12:01:49.909: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9nwq9/pods/proxy-service-x6cqx-gzk6d/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 12:02:35.988: INFO: Container started at 2020-05-22 12:02:10 +0000 UTC, pod became ready at 2020-05-22 12:02:34 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:02:35.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-2t7pw" for this suite. 
May 22 12:02:58.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:02:58.079: INFO: namespace: e2e-tests-container-probe-2t7pw, resource: bindings, ignored listing per whitelist May 22 12:02:58.100: INFO: namespace e2e-tests-container-probe-2t7pw deletion completed in 22.108199068s • [SLOW TEST:50.216 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:02:58.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 12:02:58.443: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d62b5ff-9c24-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-9m9tr" to be "success or failure" May 22 12:02:58.456: INFO: Pod "downwardapi-volume-2d62b5ff-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.614037ms May 22 12:03:00.460: INFO: Pod "downwardapi-volume-2d62b5ff-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016538814s May 22 12:03:02.464: INFO: Pod "downwardapi-volume-2d62b5ff-9c24-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020815291s STEP: Saw pod success May 22 12:03:02.464: INFO: Pod "downwardapi-volume-2d62b5ff-9c24-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:03:02.467: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2d62b5ff-9c24-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 12:03:02.639: INFO: Waiting for pod downwardapi-volume-2d62b5ff-9c24-11ea-8e9c-0242ac110018 to disappear May 22 12:03:02.641: INFO: Pod downwardapi-volume-2d62b5ff-9c24-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:03:02.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9m9tr" for this suite. 
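The projected downwardAPI case above verifies that, when a container sets no CPU limit, the file exposing limits.cpu falls back to the node's allocatable CPU. A minimal sketch of such a pod spec follows, built with the k8s.io/api Go types; the names, image and mount path are illustrative assumptions, not the test's actual values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Downward API file exposing the container's CPU limit; with no limit set
	// on the container, the kubelet writes node allocatable CPU instead.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // hypothetical image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// No resources.limits.cpu here, so the file holds node allocatable CPU.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{vol},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}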
May 22 12:03:08.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:03:08.723: INFO: namespace: e2e-tests-projected-9m9tr, resource: bindings, ignored listing per whitelist May 22 12:03:08.779: INFO: namespace e2e-tests-projected-9m9tr deletion completed in 6.135037417s • [SLOW TEST:10.679 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:03:08.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 22 12:03:20.959: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:20.959: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:20.996512 6 log.go:172] (0xc001af0420) (0xc001c71a40) Create stream I0522 12:03:20.996544 6 log.go:172] (0xc001af0420) (0xc001c71a40) Stream added, broadcasting: 1 I0522 12:03:20.999719 6 log.go:172] (0xc001af0420) Reply frame received for 1 I0522 12:03:20.999786 6 log.go:172] (0xc001af0420) (0xc0022f8fa0) Create stream I0522 12:03:20.999805 6 log.go:172] (0xc001af0420) (0xc0022f8fa0) Stream added, broadcasting: 3 I0522 12:03:21.000891 6 log.go:172] (0xc001af0420) Reply frame received for 3 I0522 12:03:21.000934 6 log.go:172] (0xc001af0420) (0xc001253860) Create stream I0522 12:03:21.000947 6 log.go:172] (0xc001af0420) (0xc001253860) Stream added, broadcasting: 5 I0522 12:03:21.002081 6 log.go:172] (0xc001af0420) Reply frame received for 5 I0522 12:03:21.085856 6 log.go:172] (0xc001af0420) Data frame received for 3 I0522 12:03:21.085893 6 log.go:172] (0xc0022f8fa0) (3) Data frame handling I0522 12:03:21.085918 6 log.go:172] (0xc0022f8fa0) (3) Data frame sent I0522 12:03:21.085931 6 log.go:172] (0xc001af0420) Data frame received for 3 I0522 12:03:21.085940 6 log.go:172] (0xc0022f8fa0) (3) Data frame handling I0522 12:03:21.086514 6 log.go:172] (0xc001af0420) Data frame received for 5 I0522 12:03:21.086548 6 log.go:172] (0xc001253860) (5) Data frame handling I0522 12:03:21.087746 6 log.go:172] (0xc001af0420) Data frame received for 1 I0522 12:03:21.087760 6 log.go:172] (0xc001c71a40) (1) Data frame handling I0522 12:03:21.087779 6 log.go:172] (0xc001c71a40) 
(1) Data frame sent I0522 12:03:21.087793 6 log.go:172] (0xc001af0420) (0xc001c71a40) Stream removed, broadcasting: 1 I0522 12:03:21.087913 6 log.go:172] (0xc001af0420) Go away received I0522 12:03:21.087959 6 log.go:172] (0xc001af0420) (0xc001c71a40) Stream removed, broadcasting: 1 I0522 12:03:21.087987 6 log.go:172] (0xc001af0420) (0xc0022f8fa0) Stream removed, broadcasting: 3 I0522 12:03:21.088009 6 log.go:172] (0xc001af0420) (0xc001253860) Stream removed, broadcasting: 5 May 22 12:03:21.088: INFO: Exec stderr: "" May 22 12:03:21.088: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:21.088: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:21.134895 6 log.go:172] (0xc000526bb0) (0xc0022f92c0) Create stream I0522 12:03:21.134932 6 log.go:172] (0xc000526bb0) (0xc0022f92c0) Stream added, broadcasting: 1 I0522 12:03:21.136934 6 log.go:172] (0xc000526bb0) Reply frame received for 1 I0522 12:03:21.136975 6 log.go:172] (0xc000526bb0) (0xc001253900) Create stream I0522 12:03:21.136986 6 log.go:172] (0xc000526bb0) (0xc001253900) Stream added, broadcasting: 3 I0522 12:03:21.138283 6 log.go:172] (0xc000526bb0) Reply frame received for 3 I0522 12:03:21.138315 6 log.go:172] (0xc000526bb0) (0xc001fee000) Create stream I0522 12:03:21.138327 6 log.go:172] (0xc000526bb0) (0xc001fee000) Stream added, broadcasting: 5 I0522 12:03:21.139306 6 log.go:172] (0xc000526bb0) Reply frame received for 5 I0522 12:03:21.207158 6 log.go:172] (0xc000526bb0) Data frame received for 5 I0522 12:03:21.207210 6 log.go:172] (0xc001fee000) (5) Data frame handling I0522 12:03:21.207244 6 log.go:172] (0xc000526bb0) Data frame received for 3 I0522 12:03:21.207301 6 log.go:172] (0xc001253900) (3) Data frame handling I0522 12:03:21.207341 6 log.go:172] (0xc001253900) (3) Data frame sent I0522 12:03:21.207363 6 log.go:172] (0xc000526bb0) Data frame received for 3 I0522 12:03:21.207388 6 log.go:172] (0xc001253900) (3) Data frame handling I0522 12:03:21.208241 6 log.go:172] (0xc000526bb0) Data frame received for 1 I0522 12:03:21.208272 6 log.go:172] (0xc0022f92c0) (1) Data frame handling I0522 12:03:21.208297 6 log.go:172] (0xc0022f92c0) (1) Data frame sent I0522 12:03:21.208321 6 log.go:172] (0xc000526bb0) (0xc0022f92c0) Stream removed, broadcasting: 1 I0522 12:03:21.208339 6 log.go:172] (0xc000526bb0) Go away received I0522 12:03:21.208660 6 log.go:172] (0xc000526bb0) (0xc0022f92c0) Stream removed, broadcasting: 1 I0522 12:03:21.208680 6 log.go:172] (0xc000526bb0) (0xc001253900) Stream removed, broadcasting: 3 I0522 12:03:21.208695 6 log.go:172] (0xc000526bb0) (0xc001fee000) Stream removed, broadcasting: 5 May 22 12:03:21.208: INFO: Exec stderr: "" May 22 12:03:21.208: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:21.208: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:21.233860 6 log.go:172] (0xc0021960b0) (0xc001c3eaa0) Create stream I0522 12:03:21.233888 6 log.go:172] (0xc0021960b0) (0xc001c3eaa0) Stream added, broadcasting: 1 I0522 12:03:21.235631 6 log.go:172] (0xc0021960b0) Reply frame received for 1 I0522 12:03:21.235685 6 log.go:172] (0xc0021960b0) (0xc001fee0a0) Create stream I0522 12:03:21.235695 6 log.go:172] (0xc0021960b0) (0xc001fee0a0) Stream added, 
broadcasting: 3 I0522 12:03:21.236616 6 log.go:172] (0xc0021960b0) Reply frame received for 3 I0522 12:03:21.236655 6 log.go:172] (0xc0021960b0) (0xc0022f9400) Create stream I0522 12:03:21.236668 6 log.go:172] (0xc0021960b0) (0xc0022f9400) Stream added, broadcasting: 5 I0522 12:03:21.237803 6 log.go:172] (0xc0021960b0) Reply frame received for 5 I0522 12:03:21.299140 6 log.go:172] (0xc0021960b0) Data frame received for 3 I0522 12:03:21.299177 6 log.go:172] (0xc001fee0a0) (3) Data frame handling I0522 12:03:21.299198 6 log.go:172] (0xc001fee0a0) (3) Data frame sent I0522 12:03:21.299211 6 log.go:172] (0xc0021960b0) Data frame received for 3 I0522 12:03:21.299224 6 log.go:172] (0xc001fee0a0) (3) Data frame handling I0522 12:03:21.299478 6 log.go:172] (0xc0021960b0) Data frame received for 5 I0522 12:03:21.299515 6 log.go:172] (0xc0022f9400) (5) Data frame handling I0522 12:03:21.301246 6 log.go:172] (0xc0021960b0) Data frame received for 1 I0522 12:03:21.301279 6 log.go:172] (0xc001c3eaa0) (1) Data frame handling I0522 12:03:21.301288 6 log.go:172] (0xc001c3eaa0) (1) Data frame sent I0522 12:03:21.301296 6 log.go:172] (0xc0021960b0) (0xc001c3eaa0) Stream removed, broadcasting: 1 I0522 12:03:21.301363 6 log.go:172] (0xc0021960b0) (0xc001c3eaa0) Stream removed, broadcasting: 1 I0522 12:03:21.301369 6 log.go:172] (0xc0021960b0) (0xc001fee0a0) Stream removed, broadcasting: 3 I0522 12:03:21.301374 6 log.go:172] (0xc0021960b0) (0xc0022f9400) Stream removed, broadcasting: 5 May 22 12:03:21.301: INFO: Exec stderr: "" I0522 12:03:21.301392 6 log.go:172] (0xc0021960b0) Go away received May 22 12:03:21.301: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:21.301: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:21.332036 6 log.go:172] (0xc0016d62c0) (0xc001fee320) Create stream I0522 12:03:21.332078 6 log.go:172] (0xc0016d62c0) (0xc001fee320) Stream added, broadcasting: 1 I0522 12:03:21.334661 6 log.go:172] (0xc0016d62c0) Reply frame received for 1 I0522 12:03:21.334712 6 log.go:172] (0xc0016d62c0) (0xc0022f94a0) Create stream I0522 12:03:21.334728 6 log.go:172] (0xc0016d62c0) (0xc0022f94a0) Stream added, broadcasting: 3 I0522 12:03:21.335608 6 log.go:172] (0xc0016d62c0) Reply frame received for 3 I0522 12:03:21.335643 6 log.go:172] (0xc0016d62c0) (0xc001c3ebe0) Create stream I0522 12:03:21.335653 6 log.go:172] (0xc0016d62c0) (0xc001c3ebe0) Stream added, broadcasting: 5 I0522 12:03:21.336595 6 log.go:172] (0xc0016d62c0) Reply frame received for 5 I0522 12:03:21.398934 6 log.go:172] (0xc0016d62c0) Data frame received for 5 I0522 12:03:21.398968 6 log.go:172] (0xc001c3ebe0) (5) Data frame handling I0522 12:03:21.399027 6 log.go:172] (0xc0016d62c0) Data frame received for 3 I0522 12:03:21.399064 6 log.go:172] (0xc0022f94a0) (3) Data frame handling I0522 12:03:21.399083 6 log.go:172] (0xc0022f94a0) (3) Data frame sent I0522 12:03:21.399095 6 log.go:172] (0xc0016d62c0) Data frame received for 3 I0522 12:03:21.399104 6 log.go:172] (0xc0022f94a0) (3) Data frame handling I0522 12:03:21.400765 6 log.go:172] (0xc0016d62c0) Data frame received for 1 I0522 12:03:21.400802 6 log.go:172] (0xc001fee320) (1) Data frame handling I0522 12:03:21.400830 6 log.go:172] (0xc001fee320) (1) Data frame sent I0522 12:03:21.400859 6 log.go:172] (0xc0016d62c0) (0xc001fee320) Stream removed, broadcasting: 1 I0522 12:03:21.400881 6 log.go:172] 
(0xc0016d62c0) Go away received I0522 12:03:21.400997 6 log.go:172] (0xc0016d62c0) (0xc001fee320) Stream removed, broadcasting: 1 I0522 12:03:21.401018 6 log.go:172] (0xc0016d62c0) (0xc0022f94a0) Stream removed, broadcasting: 3 I0522 12:03:21.401027 6 log.go:172] (0xc0016d62c0) (0xc001c3ebe0) Stream removed, broadcasting: 5 May 22 12:03:21.401: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 22 12:03:21.401: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:21.401: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:21.429559 6 log.go:172] (0xc002196630) (0xc001c3ee60) Create stream I0522 12:03:21.429583 6 log.go:172] (0xc002196630) (0xc001c3ee60) Stream added, broadcasting: 1 I0522 12:03:21.431743 6 log.go:172] (0xc002196630) Reply frame received for 1 I0522 12:03:21.431806 6 log.go:172] (0xc002196630) (0xc001253b80) Create stream I0522 12:03:21.431836 6 log.go:172] (0xc002196630) (0xc001253b80) Stream added, broadcasting: 3 I0522 12:03:21.433098 6 log.go:172] (0xc002196630) Reply frame received for 3 I0522 12:03:21.433331 6 log.go:172] (0xc002196630) (0xc0022f9540) Create stream I0522 12:03:21.433359 6 log.go:172] (0xc002196630) (0xc0022f9540) Stream added, broadcasting: 5 I0522 12:03:21.434536 6 log.go:172] (0xc002196630) Reply frame received for 5 I0522 12:03:21.489461 6 log.go:172] (0xc002196630) Data frame received for 3 I0522 12:03:21.489635 6 log.go:172] (0xc001253b80) (3) Data frame handling I0522 12:03:21.489677 6 log.go:172] (0xc001253b80) (3) Data frame sent I0522 12:03:21.489695 6 log.go:172] (0xc002196630) Data frame received for 3 I0522 12:03:21.489709 6 log.go:172] (0xc001253b80) (3) Data frame handling I0522 12:03:21.489928 6 log.go:172] (0xc002196630) Data frame received for 5 I0522 12:03:21.489950 6 log.go:172] (0xc0022f9540) (5) Data frame handling I0522 12:03:21.491613 6 log.go:172] (0xc002196630) Data frame received for 1 I0522 12:03:21.491645 6 log.go:172] (0xc001c3ee60) (1) Data frame handling I0522 12:03:21.491661 6 log.go:172] (0xc001c3ee60) (1) Data frame sent I0522 12:03:21.491673 6 log.go:172] (0xc002196630) (0xc001c3ee60) Stream removed, broadcasting: 1 I0522 12:03:21.491688 6 log.go:172] (0xc002196630) Go away received I0522 12:03:21.491794 6 log.go:172] (0xc002196630) (0xc001c3ee60) Stream removed, broadcasting: 1 I0522 12:03:21.491816 6 log.go:172] (0xc002196630) (0xc001253b80) Stream removed, broadcasting: 3 I0522 12:03:21.491830 6 log.go:172] (0xc002196630) (0xc0022f9540) Stream removed, broadcasting: 5 May 22 12:03:21.491: INFO: Exec stderr: "" May 22 12:03:21.491: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:21.491: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:21.516994 6 log.go:172] (0xc001af08f0) (0xc001c71e00) Create stream I0522 12:03:21.517043 6 log.go:172] (0xc001af08f0) (0xc001c71e00) Stream added, broadcasting: 1 I0522 12:03:21.519362 6 log.go:172] (0xc001af08f0) Reply frame received for 1 I0522 12:03:21.519399 6 log.go:172] (0xc001af08f0) (0xc0022f95e0) Create stream I0522 12:03:21.519410 6 log.go:172] (0xc001af08f0) (0xc0022f95e0) Stream added, broadcasting: 3 I0522 12:03:21.520244 6 log.go:172] 
(0xc001af08f0) Reply frame received for 3 I0522 12:03:21.520278 6 log.go:172] (0xc001af08f0) (0xc001c3ef00) Create stream I0522 12:03:21.520288 6 log.go:172] (0xc001af08f0) (0xc001c3ef00) Stream added, broadcasting: 5 I0522 12:03:21.521103 6 log.go:172] (0xc001af08f0) Reply frame received for 5 I0522 12:03:21.576268 6 log.go:172] (0xc001af08f0) Data frame received for 5 I0522 12:03:21.576295 6 log.go:172] (0xc001c3ef00) (5) Data frame handling I0522 12:03:21.576666 6 log.go:172] (0xc001af08f0) Data frame received for 3 I0522 12:03:21.576697 6 log.go:172] (0xc0022f95e0) (3) Data frame handling I0522 12:03:21.576717 6 log.go:172] (0xc0022f95e0) (3) Data frame sent I0522 12:03:21.576727 6 log.go:172] (0xc001af08f0) Data frame received for 3 I0522 12:03:21.576732 6 log.go:172] (0xc0022f95e0) (3) Data frame handling I0522 12:03:21.578711 6 log.go:172] (0xc001af08f0) Data frame received for 1 I0522 12:03:21.578735 6 log.go:172] (0xc001c71e00) (1) Data frame handling I0522 12:03:21.578763 6 log.go:172] (0xc001c71e00) (1) Data frame sent I0522 12:03:21.578784 6 log.go:172] (0xc001af08f0) (0xc001c71e00) Stream removed, broadcasting: 1 I0522 12:03:21.578807 6 log.go:172] (0xc001af08f0) Go away received I0522 12:03:21.578953 6 log.go:172] (0xc001af08f0) (0xc001c71e00) Stream removed, broadcasting: 1 I0522 12:03:21.578981 6 log.go:172] (0xc001af08f0) (0xc0022f95e0) Stream removed, broadcasting: 3 I0522 12:03:21.579000 6 log.go:172] (0xc001af08f0) (0xc001c3ef00) Stream removed, broadcasting: 5 May 22 12:03:21.579: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 22 12:03:21.579: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:21.579: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:21.643321 6 log.go:172] (0xc000527080) (0xc0022f9860) Create stream I0522 12:03:21.643349 6 log.go:172] (0xc000527080) (0xc0022f9860) Stream added, broadcasting: 1 I0522 12:03:21.645364 6 log.go:172] (0xc000527080) Reply frame received for 1 I0522 12:03:21.645418 6 log.go:172] (0xc000527080) (0xc0022f99a0) Create stream I0522 12:03:21.645430 6 log.go:172] (0xc000527080) (0xc0022f99a0) Stream added, broadcasting: 3 I0522 12:03:21.646268 6 log.go:172] (0xc000527080) Reply frame received for 3 I0522 12:03:21.646312 6 log.go:172] (0xc000527080) (0xc001c3efa0) Create stream I0522 12:03:21.646326 6 log.go:172] (0xc000527080) (0xc001c3efa0) Stream added, broadcasting: 5 I0522 12:03:21.647261 6 log.go:172] (0xc000527080) Reply frame received for 5 I0522 12:03:21.718391 6 log.go:172] (0xc000527080) Data frame received for 5 I0522 12:03:21.718430 6 log.go:172] (0xc001c3efa0) (5) Data frame handling I0522 12:03:21.718463 6 log.go:172] (0xc000527080) Data frame received for 3 I0522 12:03:21.718478 6 log.go:172] (0xc0022f99a0) (3) Data frame handling I0522 12:03:21.718496 6 log.go:172] (0xc0022f99a0) (3) Data frame sent I0522 12:03:21.718509 6 log.go:172] (0xc000527080) Data frame received for 3 I0522 12:03:21.718521 6 log.go:172] (0xc0022f99a0) (3) Data frame handling I0522 12:03:21.719933 6 log.go:172] (0xc000527080) Data frame received for 1 I0522 12:03:21.719954 6 log.go:172] (0xc0022f9860) (1) Data frame handling I0522 12:03:21.719970 6 log.go:172] (0xc0022f9860) (1) Data frame sent I0522 12:03:21.719982 6 log.go:172] (0xc000527080) (0xc0022f9860) Stream removed, 
broadcasting: 1 I0522 12:03:21.720040 6 log.go:172] (0xc000527080) Go away received I0522 12:03:21.720116 6 log.go:172] (0xc000527080) (0xc0022f9860) Stream removed, broadcasting: 1 I0522 12:03:21.720160 6 log.go:172] (0xc000527080) (0xc0022f99a0) Stream removed, broadcasting: 3 I0522 12:03:21.720179 6 log.go:172] (0xc000527080) (0xc001c3efa0) Stream removed, broadcasting: 5 May 22 12:03:21.720: INFO: Exec stderr: "" May 22 12:03:21.720: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:21.720: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:21.751912 6 log.go:172] (0xc0016d6790) (0xc001fee500) Create stream I0522 12:03:21.751939 6 log.go:172] (0xc0016d6790) (0xc001fee500) Stream added, broadcasting: 1 I0522 12:03:21.755111 6 log.go:172] (0xc0016d6790) Reply frame received for 1 I0522 12:03:21.755145 6 log.go:172] (0xc0016d6790) (0xc001253cc0) Create stream I0522 12:03:21.755154 6 log.go:172] (0xc0016d6790) (0xc001253cc0) Stream added, broadcasting: 3 I0522 12:03:21.756187 6 log.go:172] (0xc0016d6790) Reply frame received for 3 I0522 12:03:21.756234 6 log.go:172] (0xc0016d6790) (0xc001c71ea0) Create stream I0522 12:03:21.756244 6 log.go:172] (0xc0016d6790) (0xc001c71ea0) Stream added, broadcasting: 5 I0522 12:03:21.757319 6 log.go:172] (0xc0016d6790) Reply frame received for 5 I0522 12:03:21.814198 6 log.go:172] (0xc0016d6790) Data frame received for 5 I0522 12:03:21.814238 6 log.go:172] (0xc001c71ea0) (5) Data frame handling I0522 12:03:21.814265 6 log.go:172] (0xc0016d6790) Data frame received for 3 I0522 12:03:21.814288 6 log.go:172] (0xc001253cc0) (3) Data frame handling I0522 12:03:21.814304 6 log.go:172] (0xc001253cc0) (3) Data frame sent I0522 12:03:21.814312 6 log.go:172] (0xc0016d6790) Data frame received for 3 I0522 12:03:21.814318 6 log.go:172] (0xc001253cc0) (3) Data frame handling I0522 12:03:21.815350 6 log.go:172] (0xc0016d6790) Data frame received for 1 I0522 12:03:21.815366 6 log.go:172] (0xc001fee500) (1) Data frame handling I0522 12:03:21.815375 6 log.go:172] (0xc001fee500) (1) Data frame sent I0522 12:03:21.815386 6 log.go:172] (0xc0016d6790) (0xc001fee500) Stream removed, broadcasting: 1 I0522 12:03:21.815405 6 log.go:172] (0xc0016d6790) Go away received I0522 12:03:21.815540 6 log.go:172] (0xc0016d6790) (0xc001fee500) Stream removed, broadcasting: 1 I0522 12:03:21.815587 6 log.go:172] (0xc0016d6790) (0xc001253cc0) Stream removed, broadcasting: 3 I0522 12:03:21.815609 6 log.go:172] (0xc0016d6790) (0xc001c71ea0) Stream removed, broadcasting: 5 May 22 12:03:21.815: INFO: Exec stderr: "" May 22 12:03:21.815: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:21.815: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:21.845683 6 log.go:172] (0xc0016d6c60) (0xc001fee8c0) Create stream I0522 12:03:21.845710 6 log.go:172] (0xc0016d6c60) (0xc001fee8c0) Stream added, broadcasting: 1 I0522 12:03:21.847681 6 log.go:172] (0xc0016d6c60) Reply frame received for 1 I0522 12:03:21.847728 6 log.go:172] (0xc0016d6c60) (0xc001c71f40) Create stream I0522 12:03:21.847743 6 log.go:172] (0xc0016d6c60) (0xc001c71f40) Stream added, broadcasting: 3 I0522 12:03:21.848729 6 log.go:172] (0xc0016d6c60) Reply frame received 
for 3 I0522 12:03:21.848768 6 log.go:172] (0xc0016d6c60) (0xc001253d60) Create stream I0522 12:03:21.848781 6 log.go:172] (0xc0016d6c60) (0xc001253d60) Stream added, broadcasting: 5 I0522 12:03:21.849961 6 log.go:172] (0xc0016d6c60) Reply frame received for 5 I0522 12:03:21.926446 6 log.go:172] (0xc0016d6c60) Data frame received for 3 I0522 12:03:21.926481 6 log.go:172] (0xc001c71f40) (3) Data frame handling I0522 12:03:21.926494 6 log.go:172] (0xc001c71f40) (3) Data frame sent I0522 12:03:21.926502 6 log.go:172] (0xc0016d6c60) Data frame received for 3 I0522 12:03:21.926515 6 log.go:172] (0xc001c71f40) (3) Data frame handling I0522 12:03:21.926596 6 log.go:172] (0xc0016d6c60) Data frame received for 5 I0522 12:03:21.926645 6 log.go:172] (0xc001253d60) (5) Data frame handling I0522 12:03:21.927898 6 log.go:172] (0xc0016d6c60) Data frame received for 1 I0522 12:03:21.927923 6 log.go:172] (0xc001fee8c0) (1) Data frame handling I0522 12:03:21.927945 6 log.go:172] (0xc001fee8c0) (1) Data frame sent I0522 12:03:21.927969 6 log.go:172] (0xc0016d6c60) (0xc001fee8c0) Stream removed, broadcasting: 1 I0522 12:03:21.928019 6 log.go:172] (0xc0016d6c60) Go away received I0522 12:03:21.928067 6 log.go:172] (0xc0016d6c60) (0xc001fee8c0) Stream removed, broadcasting: 1 I0522 12:03:21.928083 6 log.go:172] (0xc0016d6c60) (0xc001c71f40) Stream removed, broadcasting: 3 I0522 12:03:21.928092 6 log.go:172] (0xc0016d6c60) (0xc001253d60) Stream removed, broadcasting: 5 May 22 12:03:21.928: INFO: Exec stderr: "" May 22 12:03:21.928: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dsf99 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 12:03:21.928: INFO: >>> kubeConfig: /root/.kube/config I0522 12:03:21.959291 6 log.go:172] (0xc000527550) (0xc0022f9d60) Create stream I0522 12:03:21.959316 6 log.go:172] (0xc000527550) (0xc0022f9d60) Stream added, broadcasting: 1 I0522 12:03:21.961873 6 log.go:172] (0xc000527550) Reply frame received for 1 I0522 12:03:21.961936 6 log.go:172] (0xc000527550) (0xc001c3f040) Create stream I0522 12:03:21.961956 6 log.go:172] (0xc000527550) (0xc001c3f040) Stream added, broadcasting: 3 I0522 12:03:21.963192 6 log.go:172] (0xc000527550) Reply frame received for 3 I0522 12:03:21.963264 6 log.go:172] (0xc000527550) (0xc0022f9e00) Create stream I0522 12:03:21.963286 6 log.go:172] (0xc000527550) (0xc0022f9e00) Stream added, broadcasting: 5 I0522 12:03:21.964400 6 log.go:172] (0xc000527550) Reply frame received for 5 I0522 12:03:22.033687 6 log.go:172] (0xc000527550) Data frame received for 5 I0522 12:03:22.033715 6 log.go:172] (0xc0022f9e00) (5) Data frame handling I0522 12:03:22.033739 6 log.go:172] (0xc000527550) Data frame received for 3 I0522 12:03:22.033753 6 log.go:172] (0xc001c3f040) (3) Data frame handling I0522 12:03:22.033764 6 log.go:172] (0xc001c3f040) (3) Data frame sent I0522 12:03:22.033772 6 log.go:172] (0xc000527550) Data frame received for 3 I0522 12:03:22.033776 6 log.go:172] (0xc001c3f040) (3) Data frame handling I0522 12:03:22.034763 6 log.go:172] (0xc000527550) Data frame received for 1 I0522 12:03:22.034781 6 log.go:172] (0xc0022f9d60) (1) Data frame handling I0522 12:03:22.034795 6 log.go:172] (0xc0022f9d60) (1) Data frame sent I0522 12:03:22.035000 6 log.go:172] (0xc000527550) (0xc0022f9d60) Stream removed, broadcasting: 1 I0522 12:03:22.035108 6 log.go:172] (0xc000527550) (0xc0022f9d60) Stream removed, broadcasting: 1 I0522 
12:03:22.035123 6 log.go:172] (0xc000527550) (0xc001c3f040) Stream removed, broadcasting: 3 I0522 12:03:22.035173 6 log.go:172] (0xc000527550) Go away received I0522 12:03:22.035311 6 log.go:172] (0xc000527550) (0xc0022f9e00) Stream removed, broadcasting: 5 May 22 12:03:22.035: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:03:22.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-dsf99" for this suite. May 22 12:04:12.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:04:12.066: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-dsf99, resource: bindings, ignored listing per whitelist May 22 12:04:12.135: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-dsf99 deletion completed in 50.096400315s • [SLOW TEST:63.356 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:04:12.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 22 12:04:12.256: INFO: Waiting up to 5m0s for pod "var-expansion-5978ee26-9c24-11ea-8e9c-0242ac110018" in namespace "e2e-tests-var-expansion-sx6pg" to be "success or failure" May 22 12:04:12.259: INFO: Pod "var-expansion-5978ee26-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812293ms May 22 12:04:14.263: INFO: Pod "var-expansion-5978ee26-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006771865s May 22 12:04:16.267: INFO: Pod "var-expansion-5978ee26-9c24-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010972044s STEP: Saw pod success May 22 12:04:16.267: INFO: Pod "var-expansion-5978ee26-9c24-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:04:16.270: INFO: Trying to get logs from node hunter-worker pod var-expansion-5978ee26-9c24-11ea-8e9c-0242ac110018 container dapi-container: STEP: delete the pod May 22 12:04:16.315: INFO: Waiting for pod var-expansion-5978ee26-9c24-11ea-8e9c-0242ac110018 to disappear May 22 12:04:16.319: INFO: Pod var-expansion-5978ee26-9c24-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:04:16.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-sx6pg" for this suite. May 22 12:04:22.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:04:22.378: INFO: namespace: e2e-tests-var-expansion-sx6pg, resource: bindings, ignored listing per whitelist May 22 12:04:22.444: INFO: namespace e2e-tests-var-expansion-sx6pg deletion completed in 6.106742155s • [SLOW TEST:10.308 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:04:22.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-rm7cs STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rm7cs to expose endpoints map[] May 22 12:04:22.607: INFO: Get endpoints failed (3.28399ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 22 12:04:23.611: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rm7cs exposes endpoints map[] (1.007517422s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-rm7cs STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rm7cs to expose endpoints map[pod1:[80]] May 22 12:04:27.658: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rm7cs exposes endpoints map[pod1:[80]] (4.040346541s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-rm7cs STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rm7cs to expose endpoints map[pod1:[80] pod2:[80]] May 22 12:04:30.808: INFO: successfully validated that service endpoint-test2 in 
namespace e2e-tests-services-rm7cs exposes endpoints map[pod1:[80] pod2:[80]] (3.145198269s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-rm7cs STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rm7cs to expose endpoints map[pod2:[80]] May 22 12:04:31.872: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rm7cs exposes endpoints map[pod2:[80]] (1.059478505s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-rm7cs STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rm7cs to expose endpoints map[] May 22 12:04:32.966: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rm7cs exposes endpoints map[] (1.089452016s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:04:33.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-rm7cs" for this suite. May 22 12:04:44.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:04:44.143: INFO: namespace: e2e-tests-services-rm7cs, resource: bindings, ignored listing per whitelist May 22 12:04:44.168: INFO: namespace e2e-tests-services-rm7cs deletion completed in 10.70921489s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:21.724 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:04:44.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-b5cxr STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-b5cxr to expose endpoints map[] May 22 12:04:44.974: INFO: Get endpoints failed (260.032405ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 22 12:04:45.978: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b5cxr exposes endpoints map[] (1.263849575s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-b5cxr STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-b5cxr to expose endpoints map[pod1:[100]] May 22 12:04:50.444: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b5cxr 
exposes endpoints map[pod1:[100]] (4.460241976s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-b5cxr STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-b5cxr to expose endpoints map[pod1:[100] pod2:[101]] May 22 12:04:55.013: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b5cxr exposes endpoints map[pod1:[100] pod2:[101]] (4.565343654s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-b5cxr STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-b5cxr to expose endpoints map[pod2:[101]] May 22 12:04:56.480: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b5cxr exposes endpoints map[pod2:[101]] (1.463274994s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-b5cxr STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-b5cxr to expose endpoints map[] May 22 12:04:57.558: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-b5cxr exposes endpoints map[] (1.073290382s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:04:57.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-b5cxr" for this suite. May 22 12:05:21.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:05:21.943: INFO: namespace: e2e-tests-services-b5cxr, resource: bindings, ignored listing per whitelist May 22 12:05:22.001: INFO: namespace e2e-tests-services-b5cxr deletion completed in 24.104207685s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:37.833 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:05:22.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 22 12:05:22.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 22 12:05:22.326: INFO: stderr: "" May 22 12:05:22.326: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:05:22.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4tc7q" for this suite. May 22 12:05:28.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:05:28.382: INFO: namespace: e2e-tests-kubectl-4tc7q, resource: bindings, ignored listing per whitelist May 22 12:05:28.434: INFO: namespace e2e-tests-kubectl-4tc7q deletion completed in 6.100158518s • [SLOW TEST:6.433 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:05:28.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 12:05:29.225: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"874776eb-9c24-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0009a3622), BlockOwnerDeletion:(*bool)(0xc0009a3623)}} May 22 12:05:29.340: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"871f30ab-9c24-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00131f4e2), BlockOwnerDeletion:(*bool)(0xc00131f4e3)}} May 22 12:05:29.370: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"871fb2b6-9c24-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0009a381a), BlockOwnerDeletion:(*bool)(0xc0009a381b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:05:34.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mkxlh" for this suite. 
May 22 12:05:40.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:05:40.695: INFO: namespace: e2e-tests-gc-mkxlh, resource: bindings, ignored listing per whitelist May 22 12:05:40.736: INFO: namespace e2e-tests-gc-mkxlh deletion completed in 6.080504644s • [SLOW TEST:12.302 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:05:40.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-8e4b5fa9-9c24-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume configMaps May 22 12:05:40.856: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e4da40b-9c24-11ea-8e9c-0242ac110018" in namespace "e2e-tests-configmap-jbjbq" to be "success or failure" May 22 12:05:40.916: INFO: Pod "pod-configmaps-8e4da40b-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 59.313844ms May 22 12:05:43.035: INFO: Pod "pod-configmaps-8e4da40b-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178318811s May 22 12:05:45.041: INFO: Pod "pod-configmaps-8e4da40b-9c24-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.184074217s STEP: Saw pod success May 22 12:05:45.041: INFO: Pod "pod-configmaps-8e4da40b-9c24-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:05:45.044: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-8e4da40b-9c24-11ea-8e9c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 22 12:05:45.089: INFO: Waiting for pod pod-configmaps-8e4da40b-9c24-11ea-8e9c-0242ac110018 to disappear May 22 12:05:45.268: INFO: Pod pod-configmaps-8e4da40b-9c24-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:05:45.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jbjbq" for this suite. 
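The ConfigMap case above mounts one ConfigMap through two separate volumes of the same pod and reads it back from both paths. A minimal sketch of that pod shape follows, using the k8s.io/api Go types; the ConfigMap name, image and mount paths are illustrative assumptions rather than the generated names in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cmName := "configmap-test-volume" // hypothetical; the test generates a unique name
	cmVol := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				},
			},
		}
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // hypothetical image
				Command: []string{"sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
				},
			}},
			// The same ConfigMap backs two different volumes in one pod.
			Volumes: []corev1.Volume{cmVol("configmap-volume-1"), cmVol("configmap-volume-2")},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}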
May 22 12:05:51.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:05:51.373: INFO: namespace: e2e-tests-configmap-jbjbq, resource: bindings, ignored listing per whitelist May 22 12:05:51.470: INFO: namespace e2e-tests-configmap-jbjbq deletion completed in 6.198970464s • [SLOW TEST:10.734 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:05:51.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 22 12:05:51.695: INFO: Waiting up to 5m0s for pod "var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018" in namespace "e2e-tests-var-expansion-xrrbr" to be "success or failure" May 22 12:05:51.717: INFO: Pod "var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.794669ms May 22 12:05:53.720: INFO: Pod "var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025050159s May 22 12:05:55.723: INFO: Pod "var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028217328s May 22 12:05:57.831: INFO: Pod "var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.136034278s May 22 12:05:59.855: INFO: Pod "var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.160301566s STEP: Saw pod success May 22 12:05:59.856: INFO: Pod "var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:05:59.860: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018 container dapi-container: STEP: delete the pod May 22 12:05:59.923: INFO: Waiting for pod var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018 to disappear May 22 12:06:00.101: INFO: Pod var-expansion-94bfa8c4-9c24-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:06:00.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-xrrbr" for this suite. 
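The variable-expansion case above checks that $(VAR) references in a container's args are substituted from its environment by the kubelet, with no shell involved. A minimal sketch of such a pod follows; the pod name, image and variable value are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // hypothetical image
				Command: []string{"/bin/echo"},
				// $(TEST_VAR) is expanded by the kubelet from the container's
				// environment before the command runs; no shell is involved.
				Args: []string{"-n", "$(TEST_VAR)"},
				Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The pod's log output should then contain the expanded value, which is what the "success or failure" check on the container output asserts.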
May 22 12:06:06.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:06:06.467: INFO: namespace: e2e-tests-var-expansion-xrrbr, resource: bindings, ignored listing per whitelist May 22 12:06:06.513: INFO: namespace e2e-tests-var-expansion-xrrbr deletion completed in 6.407494543s • [SLOW TEST:15.042 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:06:06.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 22 12:06:15.718: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:06:17.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-x65fs" for this suite. 
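The ReplicaSet case above creates a labeled pod first, then a ReplicaSet whose selector matches it; the orphan is adopted, and relabeling the pod so the selector no longer matches releases it again. A minimal sketch of the ReplicaSet side follows, using the k8s.io/api Go types; the label key/value and image are assumptions based on the pod name in the log, not the test's literal spec.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption-release"}
	// A ReplicaSet whose selector matches a pre-existing, orphaned pod adopts
	// it; changing that pod's label so the selector no longer matches releases it.
	rs := appsv1.ReplicaSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "ReplicaSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption-release",
						Image: "nginx", // hypothetical image
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}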
May 22 12:06:39.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:06:39.074: INFO: namespace: e2e-tests-replicaset-x65fs, resource: bindings, ignored listing per whitelist May 22 12:06:39.134: INFO: namespace e2e-tests-replicaset-x65fs deletion completed in 22.091453475s • [SLOW TEST:32.620 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:06:39.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 22 12:06:39.294: INFO: Waiting up to 5m0s for pod "pod-b123ac80-9c24-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-lwpsp" to be "success or failure" May 22 12:06:39.298: INFO: Pod "pod-b123ac80-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.522153ms May 22 12:06:41.302: INFO: Pod "pod-b123ac80-9c24-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008175655s May 22 12:06:43.306: INFO: Pod "pod-b123ac80-9c24-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01159141s STEP: Saw pod success May 22 12:06:43.306: INFO: Pod "pod-b123ac80-9c24-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:06:43.308: INFO: Trying to get logs from node hunter-worker pod pod-b123ac80-9c24-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 12:06:43.324: INFO: Waiting for pod pod-b123ac80-9c24-11ea-8e9c-0242ac110018 to disappear May 22 12:06:43.344: INFO: Pod pod-b123ac80-9c24-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:06:43.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lwpsp" for this suite. 
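The emptyDir case above runs as a non-root user against a tmpfs-backed emptyDir and checks 0666 file behavior. The sketch below shows that pod shape with the k8s.io/api Go types; the UID, image and shell command are illustrative assumptions, since the real test delegates the permission check to its test image.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1001) // hypothetical non-root UID
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // hypothetical image
				// Write a 0666 file and read it back; the conformance test image
				// performs the equivalent check itself.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}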
May 22 12:06:49.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:06:49.407: INFO: namespace: e2e-tests-emptydir-lwpsp, resource: bindings, ignored listing per whitelist May 22 12:06:49.474: INFO: namespace e2e-tests-emptydir-lwpsp deletion completed in 6.125976718s • [SLOW TEST:10.340 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:06:49.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-k7zmk STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-k7zmk STEP: Deleting pre-stop pod May 22 12:07:02.657: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:07:02.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-k7zmk" for this suite. 
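The PreStop case above counts how many times a server pod receives a prestop notification before the tester pod is killed, which is what the "prestop": 1 entry in the JSON dump records. The sketch below shows a container with a preStop lifecycle hook using the k8s.io/api Go types; the image, sleep command and the http://server:8080/prestop URL are hypothetical stand-ins, not the test's actual endpoint.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	container := corev1.Container{
		Name:    "tester",
		Image:   "busybox", // hypothetical image
		Command: []string{"sleep", "600"},
		Lifecycle: &corev1.Lifecycle{
			// corev1.Handler is the type name in client libraries contemporary
			// with v1.13; newer releases call it corev1.LifecycleHandler.
			PreStop: &corev1.Handler{
				Exec: &corev1.ExecAction{
					// Hypothetical endpoint: notify a server pod before shutdown,
					// mirroring the prestop counter recorded in the log.
					Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
				},
			},
		},
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-tester"}, // hypothetical name
		Spec:       corev1.PodSpec{Containers: []corev1.Container{container}},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}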
May 22 12:07:42.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:07:42.692: INFO: namespace: e2e-tests-prestop-k7zmk, resource: bindings, ignored listing per whitelist May 22 12:07:42.750: INFO: namespace e2e-tests-prestop-k7zmk deletion completed in 40.079397928s • [SLOW TEST:53.275 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:07:42.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d70b3a9e-9c24-11ea-8e9c-0242ac110018 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d70b3a9e-9c24-11ea-8e9c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:09:15.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xwx5k" for this suite. 
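The projected ConfigMap test above creates a pod with the ConfigMap projected into a volume, updates the ConfigMap, and waits for the mounted files to change. A minimal sketch with illustrative names and keys:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
# An update to the ConfigMap is eventually reflected inside the mounted volume;
# the kubelet syncs projected volumes periodically, so expect a short delay.
kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'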
May 22 12:09:37.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:09:37.518: INFO: namespace: e2e-tests-projected-xwx5k, resource: bindings, ignored listing per whitelist May 22 12:09:37.571: INFO: namespace e2e-tests-projected-xwx5k deletion completed in 22.078558069s • [SLOW TEST:114.821 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:09:37.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 22 12:09:37.663: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:09:45.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-s2mqn" for this suite. 
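The InitContainer test above verifies that init containers run to completion, in order, before the main container of a RestartAlways pod starts. A minimal sketch with illustrative names, reusing the nginx image seen elsewhere in this run:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:                      # executed sequentially before the app container
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init container"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init container"]
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
EOF
# The pod reaches Running only after both init containers exit successfully:
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state}'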
May 22 12:10:07.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:10:07.872: INFO: namespace: e2e-tests-init-container-s2mqn, resource: bindings, ignored listing per whitelist May 22 12:10:07.947: INFO: namespace e2e-tests-init-container-s2mqn deletion completed in 22.115899023s • [SLOW TEST:30.376 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:10:07.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 22 12:10:16.167: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 12:10:16.171: INFO: Pod pod-with-prestop-http-hook still exists May 22 12:10:18.171: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 12:10:18.219: INFO: Pod pod-with-prestop-http-hook still exists May 22 12:10:20.171: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 12:10:20.176: INFO: Pod pod-with-prestop-http-hook still exists May 22 12:10:22.171: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 12:10:22.176: INFO: Pod pod-with-prestop-http-hook still exists May 22 12:10:24.171: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 12:10:24.175: INFO: Pod pod-with-prestop-http-hook still exists May 22 12:10:26.171: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 12:10:26.176: INFO: Pod pod-with-prestop-http-hook still exists May 22 12:10:28.171: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 12:10:28.176: INFO: Pod pod-with-prestop-http-hook still exists May 22 12:10:30.171: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 12:10:30.175: INFO: Pod pod-with-prestop-http-hook still exists May 22 12:10:32.171: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 12:10:32.176: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:10:32.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7tlbh" 
for this suite. May 22 12:10:54.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:10:54.316: INFO: namespace: e2e-tests-container-lifecycle-hook-7tlbh, resource: bindings, ignored listing per whitelist May 22 12:10:54.380: INFO: namespace e2e-tests-container-lifecycle-hook-7tlbh deletion completed in 22.194273356s • [SLOW TEST:46.433 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:10:54.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 22 12:10:54.565: INFO: Waiting up to 5m0s for pod "downward-api-4944c60b-9c25-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-g4cn8" to be "success or failure" May 22 12:10:54.578: INFO: Pod "downward-api-4944c60b-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.691391ms May 22 12:10:56.582: INFO: Pod "downward-api-4944c60b-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017097921s May 22 12:10:58.587: INFO: Pod "downward-api-4944c60b-9c25-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022408998s STEP: Saw pod success May 22 12:10:58.587: INFO: Pod "downward-api-4944c60b-9c25-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:10:58.589: INFO: Trying to get logs from node hunter-worker pod downward-api-4944c60b-9c25-11ea-8e9c-0242ac110018 container dapi-container: STEP: delete the pod May 22 12:10:58.624: INFO: Waiting for pod downward-api-4944c60b-9c25-11ea-8e9c-0242ac110018 to disappear May 22 12:10:58.644: INFO: Pod downward-api-4944c60b-9c25-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:10:58.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g4cn8" for this suite. 
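The Downward API test above injects the pod's own UID into the dapi-container as an environment variable and checks the value. A minimal sketch, with illustrative pod name and image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid      # populated by the downward API at admission
EOF
kubectl logs downward-env-demo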
May 22 12:11:04.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:11:04.759: INFO: namespace: e2e-tests-downward-api-g4cn8, resource: bindings, ignored listing per whitelist May 22 12:11:04.772: INFO: namespace e2e-tests-downward-api-g4cn8 deletion completed in 6.125298744s • [SLOW TEST:10.391 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:11:04.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 12:11:05.056: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 22 12:11:05.078: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-79xsn/daemonsets","resourceVersion":"11925107"},"items":null} May 22 12:11:05.080: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-79xsn/pods","resourceVersion":"11925107"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:11:05.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-79xsn" for this suite. 
May 22 12:11:11.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:11:11.165: INFO: namespace: e2e-tests-daemonsets-79xsn, resource: bindings, ignored listing per whitelist May 22 12:11:11.198: INFO: namespace e2e-tests-daemonsets-79xsn deletion completed in 6.107023209s S [SKIPPING] [6.425 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 12:11:05.056: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:11:11.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 22 12:11:11.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rsb7m' May 22 12:11:14.216: INFO: stderr: "" May 22 12:11:14.216: INFO: stdout: "pod/pause created\n" May 22 12:11:14.216: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 22 12:11:14.216: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-rsb7m" to be "running and ready" May 22 12:11:14.222: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.923802ms May 22 12:11:16.540: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3239089s May 22 12:11:18.563: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.346663899s May 22 12:11:18.563: INFO: Pod "pause" satisfied condition "running and ready" May 22 12:11:18.563: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 22 12:11:18.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-rsb7m' May 22 12:11:18.880: INFO: stderr: "" May 22 12:11:18.880: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 22 12:11:18.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-rsb7m' May 22 12:11:19.132: INFO: stderr: "" May 22 12:11:19.132: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 22 12:11:19.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-rsb7m' May 22 12:11:19.582: INFO: stderr: "" May 22 12:11:19.582: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 22 12:11:19.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-rsb7m' May 22 12:11:19.806: INFO: stderr: "" May 22 12:11:19.806: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 22 12:11:19.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rsb7m' May 22 12:11:20.357: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 12:11:20.357: INFO: stdout: "pod \"pause\" force deleted\n" May 22 12:11:20.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-rsb7m' May 22 12:11:20.456: INFO: stderr: "No resources found.\n" May 22 12:11:20.456: INFO: stdout: "" May 22 12:11:20.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-rsb7m -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 22 12:11:20.543: INFO: stderr: "" May 22 12:11:20.544: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:11:20.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rsb7m" for this suite. 
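Reduced to its bare commands (kubeconfig and namespace flags omitted), the label sequence exercised above is:

# Add, verify, remove, and re-verify a label on the running "pause" pod:
kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label
kubectl label pods pause testing-label-
kubectl get pod pause -L testing-label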
May 22 12:11:26.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:11:26.785: INFO: namespace: e2e-tests-kubectl-rsb7m, resource: bindings, ignored listing per whitelist May 22 12:11:26.823: INFO: namespace e2e-tests-kubectl-rsb7m deletion completed in 6.27718079s • [SLOW TEST:15.625 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:11:26.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 22 12:11:27.000: INFO: Waiting up to 5m0s for pod "client-containers-5c99e587-9c25-11ea-8e9c-0242ac110018" in namespace "e2e-tests-containers-bntch" to be "success or failure" May 22 12:11:27.029: INFO: Pod "client-containers-5c99e587-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.459892ms May 22 12:11:29.583: INFO: Pod "client-containers-5c99e587-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.58329887s May 22 12:11:31.615: INFO: Pod "client-containers-5c99e587-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.614856961s May 22 12:11:33.618: INFO: Pod "client-containers-5c99e587-9c25-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.618211334s STEP: Saw pod success May 22 12:11:33.618: INFO: Pod "client-containers-5c99e587-9c25-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:11:33.621: INFO: Trying to get logs from node hunter-worker2 pod client-containers-5c99e587-9c25-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 12:11:34.166: INFO: Waiting for pod client-containers-5c99e587-9c25-11ea-8e9c-0242ac110018 to disappear May 22 12:11:34.169: INFO: Pod client-containers-5c99e587-9c25-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:11:34.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-bntch" for this suite. 
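The Docker Containers test above checks that a pod spec can override the image's default command (its ENTRYPOINT). A minimal sketch with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # "command" replaces the image's ENTRYPOINT; "args" would replace its CMD.
    command: ["echo", "entrypoint overridden"]
EOF
kubectl logs entrypoint-override-demo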
May 22 12:11:40.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:11:40.258: INFO: namespace: e2e-tests-containers-bntch, resource: bindings, ignored listing per whitelist May 22 12:11:40.309: INFO: namespace e2e-tests-containers-bntch deletion completed in 6.136650919s • [SLOW TEST:13.485 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:11:40.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 12:11:40.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-649d58c6-9c25-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-8sgwh" to be "success or failure" May 22 12:11:40.417: INFO: Pod "downwardapi-volume-649d58c6-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113747ms May 22 12:11:42.421: INFO: Pod "downwardapi-volume-649d58c6-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007252515s May 22 12:11:44.425: INFO: Pod "downwardapi-volume-649d58c6-9c25-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011111113s STEP: Saw pod success May 22 12:11:44.425: INFO: Pod "downwardapi-volume-649d58c6-9c25-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:11:44.427: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-649d58c6-9c25-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 12:11:44.470: INFO: Waiting for pod downwardapi-volume-649d58c6-9c25-11ea-8e9c-0242ac110018 to disappear May 22 12:11:44.492: INFO: Pod downwardapi-volume-649d58c6-9c25-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:11:44.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8sgwh" for this suite. 
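The projected downwardAPI test above exposes the container's own memory request as a file in a projected volume and has the client-container read it back. A minimal sketch, with an illustrative 32Mi request:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory   # written in bytes with the default divisor
EOF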
May 22 12:11:50.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:11:50.674: INFO: namespace: e2e-tests-projected-8sgwh, resource: bindings, ignored listing per whitelist May 22 12:11:50.676: INFO: namespace e2e-tests-projected-8sgwh deletion completed in 6.180260794s • [SLOW TEST:10.367 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:11:50.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0522 12:12:00.788877 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 22 12:12:00.788: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:12:00.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-4zxlv" for this suite. 
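The garbage collector test above creates a ReplicationController, deletes it without orphaning, and waits for the owned pods to be garbage-collected. A standalone equivalent with illustrative names, reusing the nginx image from this run:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Cascading (non-orphaning) delete, the default: the garbage collector removes the owned pods too.
kubectl delete rc gc-demo --cascade=true
kubectl get pods -l app=gc-demo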
May 22 12:12:08.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:12:08.845: INFO: namespace: e2e-tests-gc-4zxlv, resource: bindings, ignored listing per whitelist May 22 12:12:08.883: INFO: namespace e2e-tests-gc-4zxlv deletion completed in 8.091184331s • [SLOW TEST:18.207 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:12:08.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 22 12:12:09.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x87fc' May 22 12:12:10.984: INFO: stderr: "" May 22 12:12:10.984: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 22 12:12:12.400: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:12.400: INFO: Found 0 / 1 May 22 12:12:13.032: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:13.032: INFO: Found 0 / 1 May 22 12:12:14.053: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:14.053: INFO: Found 0 / 1 May 22 12:12:14.987: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:14.987: INFO: Found 0 / 1 May 22 12:12:16.312: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:16.312: INFO: Found 0 / 1 May 22 12:12:17.017: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:17.017: INFO: Found 0 / 1 May 22 12:12:17.988: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:17.988: INFO: Found 0 / 1 May 22 12:12:19.304: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:19.304: INFO: Found 0 / 1 May 22 12:12:19.989: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:19.989: INFO: Found 0 / 1 May 22 12:12:21.101: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:21.101: INFO: Found 1 / 1 May 22 12:12:21.101: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 22 12:12:21.104: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:21.104: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
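For reference, the per-pod annotation patch applied in the next step is equivalent to this standalone command (pod name taken from this run):

kubectl patch pod redis-master-nzqrc -p '{"metadata":{"annotations":{"x":"y"}}}'
# Verify the annotation landed:
kubectl get pod redis-master-nzqrc -o jsonpath='{.metadata.annotations.x}'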
May 22 12:12:21.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-nzqrc --namespace=e2e-tests-kubectl-x87fc -p {"metadata":{"annotations":{"x":"y"}}}' May 22 12:12:21.530: INFO: stderr: "" May 22 12:12:21.530: INFO: stdout: "pod/redis-master-nzqrc patched\n" STEP: checking annotations May 22 12:12:21.808: INFO: Selector matched 1 pods for map[app:redis] May 22 12:12:21.808: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:12:21.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-x87fc" for this suite. May 22 12:12:49.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:12:49.946: INFO: namespace: e2e-tests-kubectl-x87fc, resource: bindings, ignored listing per whitelist May 22 12:12:49.970: INFO: namespace e2e-tests-kubectl-x87fc deletion completed in 28.157899565s • [SLOW TEST:41.087 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:12:49.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 12:12:50.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-tfksv' May 22 12:12:50.695: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 12:12:50.695: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 22 12:12:55.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-tfksv' May 22 12:12:56.115: INFO: stderr: "" May 22 12:12:56.115: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:12:56.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tfksv" for this suite. May 22 12:13:05.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:13:05.391: INFO: namespace: e2e-tests-kubectl-tfksv, resource: bindings, ignored listing per whitelist May 22 12:13:05.395: INFO: namespace e2e-tests-kubectl-tfksv deletion completed in 9.276009208s • [SLOW TEST:15.425 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:13:05.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-mmsj STEP: Creating a pod to test atomic-volume-subpath May 22 12:13:07.521: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mmsj" in namespace "e2e-tests-subpath-d2qwq" to be "success or failure" May 22 12:13:07.629: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 107.504511ms May 22 12:13:10.886: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.364794248s May 22 12:13:12.952: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 5.430184237s May 22 12:13:14.964: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.442678768s May 22 12:13:17.779: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.257314839s May 22 12:13:19.879: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.357722759s May 22 12:13:21.883: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.361611748s May 22 12:13:24.362: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.84085561s May 22 12:13:26.455: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.933860179s May 22 12:13:29.006: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 21.484978699s May 22 12:13:31.010: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Pending", Reason="", readiness=false. Elapsed: 23.488800129s May 22 12:13:33.014: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Running", Reason="", readiness=true. Elapsed: 25.492996136s May 22 12:13:35.017: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Running", Reason="", readiness=false. Elapsed: 27.495528239s May 22 12:13:37.020: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Running", Reason="", readiness=false. Elapsed: 29.498335039s May 22 12:13:39.023: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Running", Reason="", readiness=false. Elapsed: 31.501302116s May 22 12:13:41.026: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Running", Reason="", readiness=false. Elapsed: 33.504597853s May 22 12:13:43.030: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Running", Reason="", readiness=false. Elapsed: 35.508732567s May 22 12:13:45.034: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Running", Reason="", readiness=false. Elapsed: 37.512985179s May 22 12:13:47.038: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Running", Reason="", readiness=false. Elapsed: 39.516751663s May 22 12:13:49.041: INFO: Pod "pod-subpath-test-downwardapi-mmsj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.520106404s STEP: Saw pod success May 22 12:13:49.042: INFO: Pod "pod-subpath-test-downwardapi-mmsj" satisfied condition "success or failure" May 22 12:13:49.044: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-mmsj container test-container-subpath-downwardapi-mmsj: STEP: delete the pod May 22 12:13:49.103: INFO: Waiting for pod pod-subpath-test-downwardapi-mmsj to disappear May 22 12:13:49.114: INFO: Pod pod-subpath-test-downwardapi-mmsj no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-mmsj May 22 12:13:49.114: INFO: Deleting pod "pod-subpath-test-downwardapi-mmsj" in namespace "e2e-tests-subpath-d2qwq" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:13:49.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-d2qwq" for this suite. 
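The Subpath test above mounts a single entry of an atomically written downward API volume via subPath and checks its contents. A minimal sketch, with illustrative names and a busybox reader:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /subpath_mount"]
    volumeMounts:
    - name: podinfo
      mountPath: /subpath_mount
      subPath: podname                 # mount one file out of the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs subpath-downward-demo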
May 22 12:13:55.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:13:55.237: INFO: namespace: e2e-tests-subpath-d2qwq, resource: bindings, ignored listing per whitelist May 22 12:13:55.253: INFO: namespace e2e-tests-subpath-d2qwq deletion completed in 6.135464854s • [SLOW TEST:49.858 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:13:55.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 12:13:55.468: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 22 12:13:55.474: INFO: Number of nodes with available pods: 0 May 22 12:13:55.474: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
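The label-driven scheduling checked in the steps below comes down to a DaemonSet whose pod template carries a nodeSelector, so relabelling a node makes the daemon pod appear or disappear. A minimal sketch, assuming an illustrative color=blue label and reusing the nginx image and the hunter-worker node name from this run:

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                    # only nodes labelled color=blue run the daemon pod
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Label a node to schedule the daemon pod there, then relabel it to evict the pod:
kubectl label node hunter-worker color=blue
kubectl label node hunter-worker color=green --overwrite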
May 22 12:13:55.531: INFO: Number of nodes with available pods: 0 May 22 12:13:55.531: INFO: Node hunter-worker is running more than one daemon pod May 22 12:13:56.535: INFO: Number of nodes with available pods: 0 May 22 12:13:56.535: INFO: Node hunter-worker is running more than one daemon pod May 22 12:13:57.535: INFO: Number of nodes with available pods: 0 May 22 12:13:57.535: INFO: Node hunter-worker is running more than one daemon pod May 22 12:13:58.557: INFO: Number of nodes with available pods: 0 May 22 12:13:58.557: INFO: Node hunter-worker is running more than one daemon pod May 22 12:13:59.535: INFO: Number of nodes with available pods: 1 May 22 12:13:59.535: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 22 12:13:59.561: INFO: Number of nodes with available pods: 1 May 22 12:13:59.561: INFO: Number of running nodes: 0, number of available pods: 1 May 22 12:14:00.566: INFO: Number of nodes with available pods: 0 May 22 12:14:00.566: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 22 12:14:00.580: INFO: Number of nodes with available pods: 0 May 22 12:14:00.580: INFO: Node hunter-worker is running more than one daemon pod May 22 12:14:01.584: INFO: Number of nodes with available pods: 0 May 22 12:14:01.584: INFO: Node hunter-worker is running more than one daemon pod May 22 12:14:02.584: INFO: Number of nodes with available pods: 0 May 22 12:14:02.584: INFO: Node hunter-worker is running more than one daemon pod May 22 12:14:03.583: INFO: Number of nodes with available pods: 0 May 22 12:14:03.583: INFO: Node hunter-worker is running more than one daemon pod May 22 12:14:04.584: INFO: Number of nodes with available pods: 0 May 22 12:14:04.584: INFO: Node hunter-worker is running more than one daemon pod May 22 12:14:05.905: INFO: Number of nodes with available pods: 0 May 22 12:14:05.905: INFO: Node hunter-worker is running more than one daemon pod May 22 12:14:06.599: INFO: Number of nodes with available pods: 0 May 22 12:14:06.599: INFO: Node hunter-worker is running more than one daemon pod May 22 12:14:07.583: INFO: Number of nodes with available pods: 0 May 22 12:14:07.583: INFO: Node hunter-worker is running more than one daemon pod May 22 12:14:08.584: INFO: Number of nodes with available pods: 1 May 22 12:14:08.584: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-sgg52, will wait for the garbage collector to delete the pods May 22 12:14:08.647: INFO: Deleting DaemonSet.extensions daemon-set took: 5.30561ms May 22 12:14:08.747: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.210463ms May 22 12:14:21.377: INFO: Number of nodes with available pods: 0 May 22 12:14:21.377: INFO: Number of running nodes: 0, number of available pods: 0 May 22 12:14:21.379: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-sgg52/daemonsets","resourceVersion":"11925757"},"items":null} May 22 12:14:21.381: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-sgg52/pods","resourceVersion":"11925757"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:14:21.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-sgg52" for this suite. May 22 12:14:29.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:14:29.485: INFO: namespace: e2e-tests-daemonsets-sgg52, resource: bindings, ignored listing per whitelist May 22 12:14:29.643: INFO: namespace e2e-tests-daemonsets-sgg52 deletion completed in 8.2200538s • [SLOW TEST:34.389 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:14:29.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 22 12:14:40.020: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:40.043: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:14:42.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:42.047: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:14:44.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:44.048: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:14:46.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:46.048: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:14:48.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:48.048: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:14:50.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:50.049: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:14:52.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:52.047: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:14:54.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:54.048: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:14:56.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:56.047: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:14:58.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:14:58.048: INFO: Pod pod-with-poststart-exec-hook still exists May 22 12:15:00.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 12:15:00.047: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:15:00.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-24spn" for this suite. 
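The lifecycle hook test above attaches a postStart exec handler and verifies it ran before deleting the pod. A minimal sketch with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-exec-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it is created; the container
          # is not marked Running until the handler completes.
          command: ["sh", "-c", "echo started > /tmp/poststart"]
EOF
kubectl delete pod poststart-exec-demo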
May 22 12:15:24.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:15:24.108: INFO: namespace: e2e-tests-container-lifecycle-hook-24spn, resource: bindings, ignored listing per whitelist May 22 12:15:24.155: INFO: namespace e2e-tests-container-lifecycle-hook-24spn deletion completed in 24.103327125s • [SLOW TEST:54.511 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:15:24.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ea0cfc35-9c25-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 12:15:24.278: INFO: Waiting up to 5m0s for pod "pod-secrets-ea0d9405-9c25-11ea-8e9c-0242ac110018" in namespace "e2e-tests-secrets-md4c8" to be "success or failure" May 22 12:15:24.295: INFO: Pod "pod-secrets-ea0d9405-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.317785ms May 22 12:15:26.627: INFO: Pod "pod-secrets-ea0d9405-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349439173s May 22 12:15:28.631: INFO: Pod "pod-secrets-ea0d9405-9c25-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.35342773s May 22 12:15:30.635: INFO: Pod "pod-secrets-ea0d9405-9c25-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.356900109s STEP: Saw pod success May 22 12:15:30.635: INFO: Pod "pod-secrets-ea0d9405-9c25-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:15:30.637: INFO: Trying to get logs from node hunter-worker pod pod-secrets-ea0d9405-9c25-11ea-8e9c-0242ac110018 container secret-env-test: STEP: delete the pod May 22 12:15:30.710: INFO: Waiting for pod pod-secrets-ea0d9405-9c25-11ea-8e9c-0242ac110018 to disappear May 22 12:15:30.731: INFO: Pod pod-secrets-ea0d9405-9c25-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:15:30.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-md4c8" for this suite. 
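The Secrets test above injects a Secret key into the secret-env-test container as an environment variable and checks its value. A minimal sketch with illustrative names and key/value:

kubectl create secret generic demo-secret --from-literal=secret-key=secret-value
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: secret-key
EOF
kubectl logs secret-env-demo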
May 22 12:15:36.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:15:36.816: INFO: namespace: e2e-tests-secrets-md4c8, resource: bindings, ignored listing per whitelist May 22 12:15:36.854: INFO: namespace e2e-tests-secrets-md4c8 deletion completed in 6.118970674s • [SLOW TEST:12.699 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:15:36.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-f19b8302-9c25-11ea-8e9c-0242ac110018 May 22 12:15:37.019: INFO: Pod name my-hostname-basic-f19b8302-9c25-11ea-8e9c-0242ac110018: Found 0 pods out of 1 May 22 12:15:42.030: INFO: Pod name my-hostname-basic-f19b8302-9c25-11ea-8e9c-0242ac110018: Found 1 pods out of 1 May 22 12:15:42.030: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f19b8302-9c25-11ea-8e9c-0242ac110018" are running May 22 12:15:42.032: INFO: Pod "my-hostname-basic-f19b8302-9c25-11ea-8e9c-0242ac110018-ltstd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 12:15:37 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 12:15:40 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 12:15:40 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 12:15:37 +0000 UTC Reason: Message:}]) May 22 12:15:42.032: INFO: Trying to dial the pod May 22 12:15:47.058: INFO: Controller my-hostname-basic-f19b8302-9c25-11ea-8e9c-0242ac110018: Got expected result from replica 1 [my-hostname-basic-f19b8302-9c25-11ea-8e9c-0242ac110018-ltstd]: "my-hostname-basic-f19b8302-9c25-11ea-8e9c-0242ac110018-ltstd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:15:47.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-7ttkc" for this suite. 
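The ReplicationController test above creates an RC from a public image, waits for the replica to run, and dials each replica to confirm it responds. A minimal sketch with illustrative names; nginx stands in for the hostname-serving image the e2e test uses:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
EOF
kubectl get pods -l app=my-hostname-basic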
May 22 12:15:53.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:15:53.134: INFO: namespace: e2e-tests-replication-controller-7ttkc, resource: bindings, ignored listing per whitelist May 22 12:15:53.186: INFO: namespace e2e-tests-replication-controller-7ttkc deletion completed in 6.12501801s • [SLOW TEST:16.332 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:15:53.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-fb55e2ef-9c25-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 12:15:53.313: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fb5c481e-9c25-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-kmrbp" to be "success or failure" May 22 12:15:53.325: INFO: Pod "pod-projected-secrets-fb5c481e-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.494114ms May 22 12:15:55.409: INFO: Pod "pod-projected-secrets-fb5c481e-9c25-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095954087s May 22 12:15:57.412: INFO: Pod "pod-projected-secrets-fb5c481e-9c25-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099381088s STEP: Saw pod success May 22 12:15:57.412: INFO: Pod "pod-projected-secrets-fb5c481e-9c25-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:15:57.415: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-fb5c481e-9c25-11ea-8e9c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 22 12:15:57.443: INFO: Waiting for pod pod-projected-secrets-fb5c481e-9c25-11ea-8e9c-0242ac110018 to disappear May 22 12:15:57.452: INFO: Pod pod-projected-secrets-fb5c481e-9c25-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:15:57.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kmrbp" for this suite. 
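The Projected secret case above mounts a secret through a projected volume into a pod running as a non-root user, with an explicit defaultMode and fsGroup, and verifies the mounted file's content and permissions. A sketch of the pod-spec side of that scenario; the UID, GID, mode, mount path, and image are chosen here purely for illustration.

package e2esketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootProjectedSecretPod mounts a secret via a projected volume with an
// explicit defaultMode while running as a non-root user with an fsGroup.
func nonRootProjectedSecretPod(ns, secretName string) *corev1.Pod {
    uid := int64(1000)
    fsGroup := int64(1001)
    mode := int32(0440)
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &uid,
                FSGroup:   &fsGroup,
            },
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected",
                }},
            }},
        },
    }
}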
May 22 12:16:03.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:16:03.504: INFO: namespace: e2e-tests-projected-kmrbp, resource: bindings, ignored listing per whitelist May 22 12:16:03.540: INFO: namespace e2e-tests-projected-kmrbp deletion completed in 6.085848901s • [SLOW TEST:10.354 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:16:03.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 22 12:16:03.734: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:03.736: INFO: Number of nodes with available pods: 0 May 22 12:16:03.736: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:04.741: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:04.744: INFO: Number of nodes with available pods: 0 May 22 12:16:04.744: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:05.740: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:05.744: INFO: Number of nodes with available pods: 0 May 22 12:16:05.744: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:06.741: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:06.744: INFO: Number of nodes with available pods: 0 May 22 12:16:06.744: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:07.741: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:07.744: INFO: Number of nodes with available pods: 0 May 22 12:16:07.744: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:08.741: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:08.745: INFO: Number of nodes with available pods: 2 May 22 12:16:08.745: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 22 12:16:08.758: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:08.760: INFO: Number of nodes with available pods: 1 May 22 12:16:08.760: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:09.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:09.771: INFO: Number of nodes with available pods: 1 May 22 12:16:09.771: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:10.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:10.770: INFO: Number of nodes with available pods: 1 May 22 12:16:10.770: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:11.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:11.770: INFO: Number of nodes with available pods: 1 May 22 12:16:11.770: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:12.765: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:12.768: INFO: Number of nodes with available pods: 1 May 22 12:16:12.768: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:13.767: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:13.770: INFO: Number of nodes with available pods: 1 May 22 12:16:13.770: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:14.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:14.769: INFO: Number of nodes with available pods: 1 May 22 12:16:14.769: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:15.765: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:15.769: INFO: Number of nodes with available pods: 1 May 22 12:16:15.769: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:16.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:16.770: INFO: Number of nodes with available pods: 1 May 22 12:16:16.770: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:17.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:17.770: INFO: Number of nodes with available pods: 1 May 22 12:16:17.770: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:18.764: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:18.767: INFO: Number of nodes with available pods: 1 May 22 12:16:18.767: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:19.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:19.770: INFO: Number of nodes with available pods: 1 May 22 12:16:19.770: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:20.765: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:20.767: INFO: Number of nodes with available pods: 1 May 22 12:16:20.767: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:21.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:21.770: INFO: Number of nodes with available pods: 1 May 22 12:16:21.770: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:22.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:22.769: INFO: Number of nodes with available pods: 1 May 22 12:16:22.769: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:23.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:23.769: INFO: Number of nodes with available pods: 1 May 22 12:16:23.769: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:24.767: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:24.771: INFO: Number of nodes with available pods: 1 May 22 12:16:24.771: INFO: Node hunter-worker is running more than one daemon pod May 22 12:16:25.766: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 12:16:25.769: INFO: Number of nodes with available pods: 2 May 22 12:16:25.769: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-jh9wp, will wait for the garbage collector to delete the pods May 22 12:16:25.832: INFO: Deleting DaemonSet.extensions daemon-set took: 6.946805ms May 22 12:16:25.932: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.264038ms May 22 12:16:41.337: INFO: Number of nodes with available pods: 0 May 22 12:16:41.337: INFO: Number of 
running nodes: 0, number of available pods: 0 May 22 12:16:41.340: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jh9wp/daemonsets","resourceVersion":"11926223"},"items":null} May 22 12:16:41.343: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jh9wp/pods","resourceVersion":"11926223"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:16:41.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-jh9wp" for this suite. May 22 12:16:47.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:16:47.415: INFO: namespace: e2e-tests-daemonsets-jh9wp, resource: bindings, ignored listing per whitelist May 22 12:16:47.490: INFO: namespace e2e-tests-daemonsets-jh9wp deletion completed in 6.132663717s • [SLOW TEST:43.949 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:16:47.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 22 12:16:52.149: INFO: Successfully updated pod "annotationupdate1bb845d5-9c26-11ea-8e9c-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:16:54.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mfgwd" for this suite. 
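The Downward API volume case above creates a pod that projects its own metadata.annotations into a file, then updates the annotations on the live pod ("Successfully updated pod ...") and waits for the file inside the container to change. A sketch of a pod wired up that way; the annotation, image, and mount path are illustrative.

package e2esketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationsDownwardAPIPod exposes the pod's own annotations as a file, so
// an annotation update on the running pod becomes visible in the container.
func annotationsDownwardAPIPod(ns string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        "annotationupdate-example",
            Namespace:   ns,
            Annotations: map[string]string{"builder": "bar"},
        },
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "annotations",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
        },
    }
}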
May 22 12:17:16.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:17:16.220: INFO: namespace: e2e-tests-downward-api-mfgwd, resource: bindings, ignored listing per whitelist May 22 12:17:16.279: INFO: namespace e2e-tests-downward-api-mfgwd deletion completed in 22.090765902s • [SLOW TEST:28.789 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:17:16.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 22 12:17:16.384: INFO: Waiting up to 5m0s for pod "client-containers-2cde34ab-9c26-11ea-8e9c-0242ac110018" in namespace "e2e-tests-containers-wgwzn" to be "success or failure" May 22 12:17:16.388: INFO: Pod "client-containers-2cde34ab-9c26-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.716266ms May 22 12:17:18.404: INFO: Pod "client-containers-2cde34ab-9c26-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020105232s May 22 12:17:20.408: INFO: Pod "client-containers-2cde34ab-9c26-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024410061s STEP: Saw pod success May 22 12:17:20.408: INFO: Pod "client-containers-2cde34ab-9c26-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:17:20.411: INFO: Trying to get logs from node hunter-worker pod client-containers-2cde34ab-9c26-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 12:17:20.431: INFO: Waiting for pod client-containers-2cde34ab-9c26-11ea-8e9c-0242ac110018 to disappear May 22 12:17:20.435: INFO: Pod client-containers-2cde34ab-9c26-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:17:20.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-wgwzn" for this suite. 
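The Docker Containers case above verifies that a pod's args field overrides the image's default CMD while leaving its ENTRYPOINT untouched, by running a throwaway pod and checking its output. A minimal sketch; the image and argument strings are assumptions.

package e2esketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overrideArgsPod sets only Args, which replaces the image's CMD; leaving
// Command unset keeps the image's ENTRYPOINT in effect.
func overrideArgsPod(ns string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                Args:  []string{"echo", "override", "arguments"},
            }},
        },
    }
}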
May 22 12:17:26.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:17:26.770: INFO: namespace: e2e-tests-containers-wgwzn, resource: bindings, ignored listing per whitelist May 22 12:17:26.782: INFO: namespace e2e-tests-containers-wgwzn deletion completed in 6.28757842s • [SLOW TEST:10.503 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:17:26.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 22 12:17:26.878: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-a,UID:332068e5-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926392,Generation:0,CreationTimestamp:2020-05-22 12:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 22 12:17:26.878: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-a,UID:332068e5-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926392,Generation:0,CreationTimestamp:2020-05-22 12:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 22 12:17:36.885: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-a,UID:332068e5-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926412,Generation:0,CreationTimestamp:2020-05-22 12:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 22 12:17:36.886: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-a,UID:332068e5-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926412,Generation:0,CreationTimestamp:2020-05-22 12:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 22 12:17:46.893: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-a,UID:332068e5-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926432,Generation:0,CreationTimestamp:2020-05-22 12:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 22 12:17:46.893: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-a,UID:332068e5-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926432,Generation:0,CreationTimestamp:2020-05-22 12:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 22 12:17:56.899: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-a,UID:332068e5-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926452,Generation:0,CreationTimestamp:2020-05-22 12:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} May 22 12:17:56.899: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-a,UID:332068e5-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926452,Generation:0,CreationTimestamp:2020-05-22 12:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 22 12:18:06.906: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-b,UID:4afd3067-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926472,Generation:0,CreationTimestamp:2020-05-22 12:18:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 22 12:18:06.907: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-b,UID:4afd3067-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926472,Generation:0,CreationTimestamp:2020-05-22 12:18:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 22 12:18:16.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-b,UID:4afd3067-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926492,Generation:0,CreationTimestamp:2020-05-22 12:18:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 22 12:18:16.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vf7mp,SelfLink:/api/v1/namespaces/e2e-tests-watch-vf7mp/configmaps/e2e-watch-test-configmap-b,UID:4afd3067-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926492,Generation:0,CreationTimestamp:2020-05-22 12:18:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:18:26.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-vf7mp" for this suite. May 22 12:18:32.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:18:33.003: INFO: namespace: e2e-tests-watch-vf7mp, resource: bindings, ignored listing per whitelist May 22 12:18:33.023: INFO: namespace e2e-tests-watch-vf7mp deletion completed in 6.102552336s • [SLOW TEST:66.240 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:18:33.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-5a9fcc13-9c26-11ea-8e9c-0242ac110018 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:18:39.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-d7ftm" for this suite. 
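The ConfigMap case above creates a ConfigMap carrying both plain text data and binaryData, mounts it into a pod, and waits until the text key and the binary payload are both readable in the volume. A sketch of the ConfigMap and its consuming pod; key names, bytes, image, and paths are illustrative.

package e2esketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// binaryConfigMapAndPod returns a ConfigMap with a text key and a binary key
// plus a pod that mounts it, mirroring the "binary data should be reflected
// in volume" scenario.
func binaryConfigMapAndPod(ns string) (*corev1.ConfigMap, *corev1.Pod) {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-example", Namespace: ns},
        Data:       map[string]string{"data": "value-1"},
        BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example", Namespace: ns},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do ls -l /etc/configmap-volume; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/configmap-volume",
                }},
            }},
        },
    }
    return cm, pod
}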
May 22 12:19:01.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:19:01.371: INFO: namespace: e2e-tests-configmap-d7ftm, resource: bindings, ignored listing per whitelist May 22 12:19:01.381: INFO: namespace e2e-tests-configmap-d7ftm deletion completed in 22.093802491s • [SLOW TEST:28.357 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:19:01.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 22 12:19:05.543: INFO: Pod pod-hostip-6b84918f-9c26-11ea-8e9c-0242ac110018 has hostIP: 172.17.0.3 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:19:05.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-5jw57" for this suite. 
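The Pods case above creates a pod and asserts that its status eventually reports a hostIP (172.17.0.3 here, the address of the worker node). A sketch of that polling loop in Go, assuming a client-go release contemporary with the v1.13 cluster in this log, where Get does not yet take a context argument; newer client versions add a context and options to the call.

package e2esketch

import (
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForHostIP polls the pod until status.hostIP is populated, which is
// essentially the condition the "should get a host IP" test asserts.
func waitForHostIP(c kubernetes.Interface, ns, name string) (string, error) {
    var hostIP string
    err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        hostIP = pod.Status.HostIP
        return hostIP != "", nil
    })
    return hostIP, err
}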
May 22 12:19:27.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:19:27.604: INFO: namespace: e2e-tests-pods-5jw57, resource: bindings, ignored listing per whitelist May 22 12:19:27.626: INFO: namespace e2e-tests-pods-5jw57 deletion completed in 22.079061981s • [SLOW TEST:26.245 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:19:27.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-jl55m A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-jl55m;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-jl55m A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-jl55m.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-jl55m.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-jl55m.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-jl55m.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jl55m.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-jl55m.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jl55m.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-jl55m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 48.106.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.106.48_udp@PTR;check="$$(dig +tcp +noall +answer +search 48.106.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.106.48_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-jl55m A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-jl55m;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-jl55m A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-jl55m;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-jl55m.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-jl55m.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-jl55m.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jl55m.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-jl55m.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jl55m.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-jl55m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 48.106.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.106.48_udp@PTR;check="$$(dig +tcp +noall +answer +search 48.106.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.106.48_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 22 12:19:35.849: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:35.890: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:35.892: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:35.895: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:35.898: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:35.901: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:35.904: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:35.907: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:35.910: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:35.928: INFO: Lookups using e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jl55m jessie_tcp@dns-test-service.e2e-tests-dns-jl55m jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc] May 22 12:19:40.951: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:40.977: 
INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:40.980: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:40.983: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:40.985: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:40.988: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:40.991: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:40.993: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:40.996: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:41.012: INFO: Lookups using e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jl55m jessie_tcp@dns-test-service.e2e-tests-dns-jl55m jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc] May 22 12:19:45.947: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:45.972: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:45.975: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:45.978: INFO: Unable to read 
jessie_udp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:45.981: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:45.984: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:45.987: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:45.990: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:45.993: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:46.009: INFO: Lookups using e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jl55m jessie_tcp@dns-test-service.e2e-tests-dns-jl55m jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc] May 22 12:19:50.948: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:50.972: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:50.975: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:50.977: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:50.980: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:50.982: INFO: Unable to read 
jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:50.984: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:50.987: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:50.989: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:51.006: INFO: Lookups using e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jl55m jessie_tcp@dns-test-service.e2e-tests-dns-jl55m jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc] May 22 12:19:55.949: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:55.975: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:55.979: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:55.982: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:55.986: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:55.989: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:55.992: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:55.995: INFO: Unable to 
read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:55.998: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:19:56.014: INFO: Lookups using e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jl55m jessie_tcp@dns-test-service.e2e-tests-dns-jl55m jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc] May 22 12:20:00.950: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:20:00.976: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:20:00.979: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:20:00.982: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:20:00.985: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:20:00.988: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:20:00.991: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:20:00.994: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 12:20:00.996: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc from pod e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018: the server could not find the requested resource (get pods dns-test-7b323762-9c26-11ea-8e9c-0242ac110018) May 22 
12:20:01.012: INFO: Lookups using e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jl55m jessie_tcp@dns-test-service.e2e-tests-dns-jl55m jessie_udp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@dns-test-service.e2e-tests-dns-jl55m.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jl55m.svc] May 22 12:20:06.029: INFO: DNS probes using e2e-tests-dns-jl55m/dns-test-7b323762-9c26-11ea-8e9c-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:20:06.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-jl55m" for this suite. May 22 12:20:12.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:20:13.034: INFO: namespace: e2e-tests-dns-jl55m, resource: bindings, ignored listing per whitelist May 22 12:20:13.050: INFO: namespace e2e-tests-dns-jl55m deletion completed in 6.08293595s • [SLOW TEST:45.424 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:20:13.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-96421ff2-9c26-11ea-8e9c-0242ac110018 STEP: Creating secret with name s-test-opt-upd-9642204a-9c26-11ea-8e9c-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-96421ff2-9c26-11ea-8e9c-0242ac110018 STEP: Updating secret s-test-opt-upd-9642204a-9c26-11ea-8e9c-0242ac110018 STEP: Creating secret with name s-test-opt-create-9642206e-9c26-11ea-8e9c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:20:21.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-klxd5" for this suite. 
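For reference, the optional-secret behaviour exercised in the projected-secret test above (a projected volume that keeps working when one source secret is deleted and picks up an updated or newly created one without a pod restart) can be reproduced by hand. The manifest below is only a minimal sketch under those assumptions, not the conformance test's own spec, and all object names are hypothetical:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: opt-upd-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "for i in $(seq 1 60); do cat /etc/projected/data-1 2>/dev/null; sleep 2; done"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: opt-upd-demo
          optional: true
      - secret:
          name: opt-create-demo    # does not exist yet; optional, so the pod still starts
          optional: true
EOF
# Editing opt-upd-demo or creating opt-create-demo afterwards should show up under
# /etc/projected in the running pod without a restart, which is what the test waits for.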
May 22 12:20:45.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:20:45.454: INFO: namespace: e2e-tests-projected-klxd5, resource: bindings, ignored listing per whitelist May 22 12:20:45.454: INFO: namespace e2e-tests-projected-klxd5 deletion completed in 24.08841918s • [SLOW TEST:32.404 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:20:45.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 12:20:45.559: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 22 12:20:50.572: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 22 12:20:50.572: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 22 12:20:50.588: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-8rt2s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8rt2s/deployments/test-cleanup-deployment,UID:ac8c078f-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926964,Generation:1,CreationTimestamp:2020-05-22 12:20:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 22 12:20:50.595: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. May 22 12:20:50.595: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 22 12:20:50.596: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-8rt2s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8rt2s/replicasets/test-cleanup-controller,UID:a98b5510-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926965,Generation:1,CreationTimestamp:2020-05-22 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ac8c078f-9c26-11ea-99e8-0242ac110002 0xc0019f7127 0xc0019f7128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 22 12:20:50.615: INFO: Pod "test-cleanup-controller-4j7g7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-4j7g7,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-8rt2s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8rt2s/pods/test-cleanup-controller-4j7g7,UID:a98f1093-9c26-11ea-99e8-0242ac110002,ResourceVersion:11926956,Generation:0,CreationTimestamp:2020-05-22 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller a98b5510-9c26-11ea-99e8-0242ac110002 0xc000f4cd87 0xc000f4cd88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qfdxm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qfdxm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qfdxm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000f4ce00} {node.kubernetes.io/unreachable Exists NoExecute 0xc000f4ce20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:20:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:20:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:20:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:20:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.44,StartTime:2020-05-22 12:20:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 12:20:48 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f31a796e578352bd003cbf6ecff28eb4ad51baf07266da0183fd3245e51a163c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:20:50.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-8rt2s" for this suite. May 22 12:20:56.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:20:56.761: INFO: namespace: e2e-tests-deployment-8rt2s, resource: bindings, ignored listing per whitelist May 22 12:20:56.807: INFO: namespace e2e-tests-deployment-8rt2s deletion completed in 6.142424239s • [SLOW TEST:11.352 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:20:56.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 12:20:56.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z2827' May 22 12:20:57.123: INFO: stderr: "" May 22 12:20:57.123: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 22 12:21:02.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z2827 -o json' May 22 12:21:02.270: INFO: stderr: "" May 22 12:21:02.270: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-22T12:20:57Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-z2827\",\n \"resourceVersion\": \"11927034\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-z2827/pods/e2e-test-nginx-pod\",\n \"uid\": \"b07005c3-9c26-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n 
\"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-72wv8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-72wv8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-72wv8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-22T12:20:57Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-22T12:21:00Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-22T12:21:00Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-22T12:20:57Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://74c463c1ee06758ac07b3671fb3d3183b607d7920ea13eb6dbef85c625c7d0ab\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-22T12:20:59Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.46\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-22T12:20:57Z\"\n }\n}\n" STEP: replace the image in the pod May 22 12:21:02.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-z2827' May 22 12:21:02.529: INFO: stderr: "" May 22 12:21:02.530: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 22 12:21:02.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z2827' May 22 12:21:06.291: INFO: stderr: "" May 22 12:21:06.291: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:21:06.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z2827" for this suite. 
May 22 12:21:12.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:21:12.366: INFO: namespace: e2e-tests-kubectl-z2827, resource: bindings, ignored listing per whitelist May 22 12:21:12.414: INFO: namespace e2e-tests-kubectl-z2827 deletion completed in 6.118856751s • [SLOW TEST:15.607 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:21:12.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 22 12:21:12.554: INFO: Waiting up to 5m0s for pod "client-containers-b99ffe5a-9c26-11ea-8e9c-0242ac110018" in namespace "e2e-tests-containers-2wjpp" to be "success or failure" May 22 12:21:12.560: INFO: Pod "client-containers-b99ffe5a-9c26-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.698325ms May 22 12:21:14.564: INFO: Pod "client-containers-b99ffe5a-9c26-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010039769s May 22 12:21:16.567: INFO: Pod "client-containers-b99ffe5a-9c26-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013611056s STEP: Saw pod success May 22 12:21:16.567: INFO: Pod "client-containers-b99ffe5a-9c26-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:21:16.570: INFO: Trying to get logs from node hunter-worker pod client-containers-b99ffe5a-9c26-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 12:21:16.584: INFO: Waiting for pod client-containers-b99ffe5a-9c26-11ea-8e9c-0242ac110018 to disappear May 22 12:21:16.595: INFO: Pod client-containers-b99ffe5a-9c26-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:21:16.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-2wjpp" for this suite. 
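The property checked above is that a container spec which sets neither command nor args falls back to the image's own ENTRYPOINT and CMD. A quick manual check under the same assumptions, using an image that already appears in this run and a hypothetical pod name (--generator=run-pod/v1 mirrors the kubectl v1.13 invocations seen earlier; newer kubectl drops that flag):

kubectl run image-defaults --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
# neither command: nor args: is stored in the spec, so the kubelet runs the image defaults
kubectl get pod image-defaults -o yaml | grep -E '^ *(command|args):' || echo "no command/args set"
kubectl delete pod image-defaults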
May 22 12:21:22.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:21:22.658: INFO: namespace: e2e-tests-containers-2wjpp, resource: bindings, ignored listing per whitelist May 22 12:21:22.693: INFO: namespace e2e-tests-containers-2wjpp deletion completed in 6.094610857s • [SLOW TEST:10.280 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:21:22.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:22:22.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-w9h2t" for this suite. 
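The roughly minute-long observation above (12:21:22 to 12:22:22) verifies that a pod whose readiness probe always fails stays Ready=false and is never restarted; readiness failures, unlike liveness failures, never kill the container. A minimal sketch of such a pod, with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# expect "false 0": never ready, never restarted
kubectl get pod never-ready -o jsonpath='{.status.containerStatuses[0].ready} {.status.containerStatuses[0].restartCount}'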
May 22 12:22:44.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:22:44.882: INFO: namespace: e2e-tests-container-probe-w9h2t, resource: bindings, ignored listing per whitelist May 22 12:22:44.934: INFO: namespace e2e-tests-container-probe-w9h2t deletion completed in 22.075897538s • [SLOW TEST:82.240 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:22:44.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6h4bf [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 22 12:22:45.070: INFO: Found 0 stateful pods, waiting for 3 May 22 12:22:55.074: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 22 12:22:55.074: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 12:22:55.074: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 22 12:23:05.075: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 22 12:23:05.075: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 12:23:05.075: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 22 12:23:05.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6h4bf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 12:23:05.338: INFO: stderr: "I0522 12:23:05.214029 2570 log.go:172] (0xc0008542c0) (0xc000699220) Create stream\nI0522 12:23:05.214113 2570 log.go:172] (0xc0008542c0) (0xc000699220) Stream added, broadcasting: 1\nI0522 12:23:05.217544 2570 log.go:172] (0xc0008542c0) Reply frame received for 1\nI0522 12:23:05.217593 2570 log.go:172] (0xc0008542c0) (0xc0002a6000) Create stream\nI0522 12:23:05.217614 2570 log.go:172] (0xc0008542c0) (0xc0002a6000) Stream added, broadcasting: 3\nI0522 12:23:05.218755 2570 log.go:172] (0xc0008542c0) Reply frame received for 3\nI0522 12:23:05.218797 2570 log.go:172] 
(0xc0008542c0) (0xc00065a000) Create stream\nI0522 12:23:05.218823 2570 log.go:172] (0xc0008542c0) (0xc00065a000) Stream added, broadcasting: 5\nI0522 12:23:05.219950 2570 log.go:172] (0xc0008542c0) Reply frame received for 5\nI0522 12:23:05.328892 2570 log.go:172] (0xc0008542c0) Data frame received for 5\nI0522 12:23:05.328919 2570 log.go:172] (0xc00065a000) (5) Data frame handling\nI0522 12:23:05.328949 2570 log.go:172] (0xc0008542c0) Data frame received for 3\nI0522 12:23:05.328980 2570 log.go:172] (0xc0002a6000) (3) Data frame handling\nI0522 12:23:05.329013 2570 log.go:172] (0xc0002a6000) (3) Data frame sent\nI0522 12:23:05.329050 2570 log.go:172] (0xc0008542c0) Data frame received for 3\nI0522 12:23:05.329063 2570 log.go:172] (0xc0002a6000) (3) Data frame handling\nI0522 12:23:05.330963 2570 log.go:172] (0xc0008542c0) Data frame received for 1\nI0522 12:23:05.330982 2570 log.go:172] (0xc000699220) (1) Data frame handling\nI0522 12:23:05.330991 2570 log.go:172] (0xc000699220) (1) Data frame sent\nI0522 12:23:05.331001 2570 log.go:172] (0xc0008542c0) (0xc000699220) Stream removed, broadcasting: 1\nI0522 12:23:05.331018 2570 log.go:172] (0xc0008542c0) Go away received\nI0522 12:23:05.331256 2570 log.go:172] (0xc0008542c0) (0xc000699220) Stream removed, broadcasting: 1\nI0522 12:23:05.331273 2570 log.go:172] (0xc0008542c0) (0xc0002a6000) Stream removed, broadcasting: 3\nI0522 12:23:05.331283 2570 log.go:172] (0xc0008542c0) (0xc00065a000) Stream removed, broadcasting: 5\n" May 22 12:23:05.338: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 12:23:05.338: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 22 12:23:15.370: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 22 12:23:25.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6h4bf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 12:23:25.649: INFO: stderr: "I0522 12:23:25.583311 2592 log.go:172] (0xc00014c790) (0xc00071e640) Create stream\nI0522 12:23:25.583377 2592 log.go:172] (0xc00014c790) (0xc00071e640) Stream added, broadcasting: 1\nI0522 12:23:25.586250 2592 log.go:172] (0xc00014c790) Reply frame received for 1\nI0522 12:23:25.586301 2592 log.go:172] (0xc00014c790) (0xc0007b4dc0) Create stream\nI0522 12:23:25.586315 2592 log.go:172] (0xc00014c790) (0xc0007b4dc0) Stream added, broadcasting: 3\nI0522 12:23:25.587180 2592 log.go:172] (0xc00014c790) Reply frame received for 3\nI0522 12:23:25.587240 2592 log.go:172] (0xc00014c790) (0xc00036a000) Create stream\nI0522 12:23:25.587260 2592 log.go:172] (0xc00014c790) (0xc00036a000) Stream added, broadcasting: 5\nI0522 12:23:25.588257 2592 log.go:172] (0xc00014c790) Reply frame received for 5\nI0522 12:23:25.642294 2592 log.go:172] (0xc00014c790) Data frame received for 5\nI0522 12:23:25.642338 2592 log.go:172] (0xc00036a000) (5) Data frame handling\nI0522 12:23:25.642364 2592 log.go:172] (0xc00014c790) Data frame received for 3\nI0522 12:23:25.642374 2592 log.go:172] (0xc0007b4dc0) (3) Data frame handling\nI0522 12:23:25.642391 2592 log.go:172] (0xc0007b4dc0) (3) Data frame sent\nI0522 12:23:25.642408 2592 log.go:172] (0xc00014c790) Data frame received for 3\nI0522 12:23:25.642419 2592 
log.go:172] (0xc0007b4dc0) (3) Data frame handling\nI0522 12:23:25.643764 2592 log.go:172] (0xc00014c790) Data frame received for 1\nI0522 12:23:25.643783 2592 log.go:172] (0xc00071e640) (1) Data frame handling\nI0522 12:23:25.643792 2592 log.go:172] (0xc00071e640) (1) Data frame sent\nI0522 12:23:25.643867 2592 log.go:172] (0xc00014c790) (0xc00071e640) Stream removed, broadcasting: 1\nI0522 12:23:25.643945 2592 log.go:172] (0xc00014c790) Go away received\nI0522 12:23:25.644108 2592 log.go:172] (0xc00014c790) (0xc00071e640) Stream removed, broadcasting: 1\nI0522 12:23:25.644142 2592 log.go:172] (0xc00014c790) (0xc0007b4dc0) Stream removed, broadcasting: 3\nI0522 12:23:25.644160 2592 log.go:172] (0xc00014c790) (0xc00036a000) Stream removed, broadcasting: 5\n" May 22 12:23:25.649: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 12:23:25.649: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 12:23:45.669: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h4bf/ss2 to complete update May 22 12:23:45.669: INFO: Waiting for Pod e2e-tests-statefulset-6h4bf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 22 12:23:55.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6h4bf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 12:23:56.020: INFO: stderr: "I0522 12:23:55.849801 2615 log.go:172] (0xc0008382c0) (0xc000625400) Create stream\nI0522 12:23:55.849870 2615 log.go:172] (0xc0008382c0) (0xc000625400) Stream added, broadcasting: 1\nI0522 12:23:55.851836 2615 log.go:172] (0xc0008382c0) Reply frame received for 1\nI0522 12:23:55.851874 2615 log.go:172] (0xc0008382c0) (0xc0001f2000) Create stream\nI0522 12:23:55.851884 2615 log.go:172] (0xc0008382c0) (0xc0001f2000) Stream added, broadcasting: 3\nI0522 12:23:55.852815 2615 log.go:172] (0xc0008382c0) Reply frame received for 3\nI0522 12:23:55.852869 2615 log.go:172] (0xc0008382c0) (0xc0006de000) Create stream\nI0522 12:23:55.852886 2615 log.go:172] (0xc0008382c0) (0xc0006de000) Stream added, broadcasting: 5\nI0522 12:23:55.853788 2615 log.go:172] (0xc0008382c0) Reply frame received for 5\nI0522 12:23:56.012588 2615 log.go:172] (0xc0008382c0) Data frame received for 3\nI0522 12:23:56.012624 2615 log.go:172] (0xc0001f2000) (3) Data frame handling\nI0522 12:23:56.012639 2615 log.go:172] (0xc0001f2000) (3) Data frame sent\nI0522 12:23:56.013452 2615 log.go:172] (0xc0008382c0) Data frame received for 3\nI0522 12:23:56.013492 2615 log.go:172] (0xc0001f2000) (3) Data frame handling\nI0522 12:23:56.014067 2615 log.go:172] (0xc0008382c0) Data frame received for 5\nI0522 12:23:56.014088 2615 log.go:172] (0xc0006de000) (5) Data frame handling\nI0522 12:23:56.015849 2615 log.go:172] (0xc0008382c0) Data frame received for 1\nI0522 12:23:56.015868 2615 log.go:172] (0xc000625400) (1) Data frame handling\nI0522 12:23:56.015900 2615 log.go:172] (0xc000625400) (1) Data frame sent\nI0522 12:23:56.015923 2615 log.go:172] (0xc0008382c0) (0xc000625400) Stream removed, broadcasting: 1\nI0522 12:23:56.016163 2615 log.go:172] (0xc0008382c0) (0xc000625400) Stream removed, broadcasting: 1\nI0522 12:23:56.016185 2615 log.go:172] (0xc0008382c0) (0xc0001f2000) Stream removed, broadcasting: 3\nI0522 12:23:56.016197 2615 log.go:172] (0xc0008382c0) (0xc0006de000) Stream removed, broadcasting: 5\n" May 22 
12:23:56.020: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 12:23:56.020: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 12:24:06.054: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 22 12:24:16.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6h4bf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 12:24:16.292: INFO: stderr: "I0522 12:24:16.202407 2638 log.go:172] (0xc000726370) (0xc000766640) Create stream\nI0522 12:24:16.202458 2638 log.go:172] (0xc000726370) (0xc000766640) Stream added, broadcasting: 1\nI0522 12:24:16.204260 2638 log.go:172] (0xc000726370) Reply frame received for 1\nI0522 12:24:16.204319 2638 log.go:172] (0xc000726370) (0xc0005e0c80) Create stream\nI0522 12:24:16.204332 2638 log.go:172] (0xc000726370) (0xc0005e0c80) Stream added, broadcasting: 3\nI0522 12:24:16.205096 2638 log.go:172] (0xc000726370) Reply frame received for 3\nI0522 12:24:16.205299 2638 log.go:172] (0xc000726370) (0xc0000ee000) Create stream\nI0522 12:24:16.205321 2638 log.go:172] (0xc000726370) (0xc0000ee000) Stream added, broadcasting: 5\nI0522 12:24:16.206176 2638 log.go:172] (0xc000726370) Reply frame received for 5\nI0522 12:24:16.284163 2638 log.go:172] (0xc000726370) Data frame received for 5\nI0522 12:24:16.284201 2638 log.go:172] (0xc0000ee000) (5) Data frame handling\nI0522 12:24:16.284222 2638 log.go:172] (0xc000726370) Data frame received for 3\nI0522 12:24:16.284228 2638 log.go:172] (0xc0005e0c80) (3) Data frame handling\nI0522 12:24:16.284234 2638 log.go:172] (0xc0005e0c80) (3) Data frame sent\nI0522 12:24:16.284239 2638 log.go:172] (0xc000726370) Data frame received for 3\nI0522 12:24:16.284242 2638 log.go:172] (0xc0005e0c80) (3) Data frame handling\nI0522 12:24:16.285675 2638 log.go:172] (0xc000726370) Data frame received for 1\nI0522 12:24:16.285703 2638 log.go:172] (0xc000766640) (1) Data frame handling\nI0522 12:24:16.285724 2638 log.go:172] (0xc000766640) (1) Data frame sent\nI0522 12:24:16.285741 2638 log.go:172] (0xc000726370) (0xc000766640) Stream removed, broadcasting: 1\nI0522 12:24:16.285914 2638 log.go:172] (0xc000726370) (0xc000766640) Stream removed, broadcasting: 1\nI0522 12:24:16.285933 2638 log.go:172] (0xc000726370) (0xc0005e0c80) Stream removed, broadcasting: 3\nI0522 12:24:16.286092 2638 log.go:172] (0xc000726370) (0xc0000ee000) Stream removed, broadcasting: 5\nI0522 12:24:16.286114 2638 log.go:172] (0xc000726370) Go away received\n" May 22 12:24:16.292: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 12:24:16.292: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 12:24:36.321: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h4bf/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 22 12:24:46.330: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6h4bf May 22 12:24:46.333: INFO: Scaling statefulset ss2 to 0 May 22 12:25:16.367: INFO: Waiting for statefulset status.replicas updated to 0 May 22 12:25:16.370: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:25:16.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6h4bf" for this suite. May 22 12:25:22.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:25:22.526: INFO: namespace: e2e-tests-statefulset-6h4bf, resource: bindings, ignored listing per whitelist May 22 12:25:22.548: INFO: namespace e2e-tests-statefulset-6h4bf deletion completed in 6.157858023s • [SLOW TEST:157.615 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:25:22.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 12:25:22.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4eb4b26b-9c27-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-286zm" to be "success or failure" May 22 12:25:22.661: INFO: Pod "downwardapi-volume-4eb4b26b-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.956215ms May 22 12:25:24.671: INFO: Pod "downwardapi-volume-4eb4b26b-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030270119s May 22 12:25:26.674: INFO: Pod "downwardapi-volume-4eb4b26b-9c27-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033733438s STEP: Saw pod success May 22 12:25:26.674: INFO: Pod "downwardapi-volume-4eb4b26b-9c27-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:25:26.677: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4eb4b26b-9c27-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 12:25:26.743: INFO: Waiting for pod downwardapi-volume-4eb4b26b-9c27-11ea-8e9c-0242ac110018 to disappear May 22 12:25:26.768: INFO: Pod downwardapi-volume-4eb4b26b-9c27-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:25:26.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-286zm" for this suite. May 22 12:25:32.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:25:32.837: INFO: namespace: e2e-tests-downward-api-286zm, resource: bindings, ignored listing per whitelist May 22 12:25:32.875: INFO: namespace e2e-tests-downward-api-286zm deletion completed in 6.104129695s • [SLOW TEST:10.327 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:25:32.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-8xnt STEP: Creating a pod to test atomic-volume-subpath May 22 12:25:33.004: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8xnt" in namespace "e2e-tests-subpath-d4ds7" to be "success or failure" May 22 12:25:33.018: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.024087ms May 22 12:25:35.022: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01778516s May 22 12:25:37.026: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021733035s May 22 12:25:39.029: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=true. Elapsed: 6.024252955s May 22 12:25:41.033: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=false. Elapsed: 8.028614394s May 22 12:25:43.038: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.033269548s May 22 12:25:45.042: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=false. Elapsed: 12.037347873s May 22 12:25:47.047: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=false. Elapsed: 14.042345708s May 22 12:25:49.051: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=false. Elapsed: 16.046265012s May 22 12:25:51.054: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=false. Elapsed: 18.05011298s May 22 12:25:53.058: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=false. Elapsed: 20.054174341s May 22 12:25:55.063: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=false. Elapsed: 22.058330769s May 22 12:25:57.067: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Running", Reason="", readiness=false. Elapsed: 24.062910434s May 22 12:25:59.071: INFO: Pod "pod-subpath-test-secret-8xnt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.067224459s STEP: Saw pod success May 22 12:25:59.072: INFO: Pod "pod-subpath-test-secret-8xnt" satisfied condition "success or failure" May 22 12:25:59.075: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-8xnt container test-container-subpath-secret-8xnt: STEP: delete the pod May 22 12:25:59.126: INFO: Waiting for pod pod-subpath-test-secret-8xnt to disappear May 22 12:25:59.136: INFO: Pod pod-subpath-test-secret-8xnt no longer exists STEP: Deleting pod pod-subpath-test-secret-8xnt May 22 12:25:59.136: INFO: Deleting pod "pod-subpath-test-secret-8xnt" in namespace "e2e-tests-subpath-d4ds7" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:25:59.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-d4ds7" for this suite. 
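The subpath test above mounts a single key of a secret at a file path inside the container (via subPath on the volume mount) and watches the pod run to completion. A hand-rolled version of the same idea follows; the object names are hypothetical and this is not the conformance test's actual manifest:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: subpath-demo
stringData:
  message: hello from a secret subPath
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["cat", "/data/message"]
    volumeMounts:
    - name: secret-vol
      mountPath: /data/message
      subPath: message
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-demo
EOF
kubectl logs secret-subpath-demo    # prints the secret value once the container has run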
May 22 12:26:05.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:26:05.232: INFO: namespace: e2e-tests-subpath-d4ds7, resource: bindings, ignored listing per whitelist May 22 12:26:05.256: INFO: namespace e2e-tests-subpath-d4ds7 deletion completed in 6.111427001s • [SLOW TEST:32.381 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:26:05.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 22 12:26:05.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:07.913: INFO: stderr: "" May 22 12:26:07.913: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 12:26:07.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:08.038: INFO: stderr: "" May 22 12:26:08.038: INFO: stdout: "update-demo-nautilus-657xs update-demo-nautilus-6q544 " May 22 12:26:08.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-657xs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:08.142: INFO: stderr: "" May 22 12:26:08.142: INFO: stdout: "" May 22 12:26:08.142: INFO: update-demo-nautilus-657xs is created but not running May 22 12:26:13.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:13.249: INFO: stderr: "" May 22 12:26:13.249: INFO: stdout: "update-demo-nautilus-657xs update-demo-nautilus-6q544 " May 22 12:26:13.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-657xs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:13.365: INFO: stderr: "" May 22 12:26:13.365: INFO: stdout: "true" May 22 12:26:13.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-657xs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:13.469: INFO: stderr: "" May 22 12:26:13.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 12:26:13.469: INFO: validating pod update-demo-nautilus-657xs May 22 12:26:13.478: INFO: got data: { "image": "nautilus.jpg" } May 22 12:26:13.478: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 12:26:13.478: INFO: update-demo-nautilus-657xs is verified up and running May 22 12:26:13.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6q544 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:13.581: INFO: stderr: "" May 22 12:26:13.581: INFO: stdout: "true" May 22 12:26:13.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6q544 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:13.679: INFO: stderr: "" May 22 12:26:13.679: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 12:26:13.679: INFO: validating pod update-demo-nautilus-6q544 May 22 12:26:13.685: INFO: got data: { "image": "nautilus.jpg" } May 22 12:26:13.685: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 22 12:26:13.685: INFO: update-demo-nautilus-6q544 is verified up and running STEP: rolling-update to new replication controller May 22 12:26:13.687: INFO: scanned /root for discovery docs: May 22 12:26:13.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:36.307: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 22 12:26:36.307: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 12:26:36.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:36.414: INFO: stderr: "" May 22 12:26:36.414: INFO: stdout: "update-demo-kitten-lgbz4 update-demo-kitten-pn4v7 " May 22 12:26:36.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lgbz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:36.514: INFO: stderr: "" May 22 12:26:36.514: INFO: stdout: "true" May 22 12:26:36.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lgbz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:36.606: INFO: stderr: "" May 22 12:26:36.606: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 22 12:26:36.606: INFO: validating pod update-demo-kitten-lgbz4 May 22 12:26:36.616: INFO: got data: { "image": "kitten.jpg" } May 22 12:26:36.616: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 22 12:26:36.616: INFO: update-demo-kitten-lgbz4 is verified up and running May 22 12:26:36.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pn4v7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:36.711: INFO: stderr: "" May 22 12:26:36.711: INFO: stdout: "true" May 22 12:26:36.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pn4v7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7zwv7' May 22 12:26:36.808: INFO: stderr: "" May 22 12:26:36.808: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 22 12:26:36.808: INFO: validating pod update-demo-kitten-pn4v7 May 22 12:26:36.813: INFO: got data: { "image": "kitten.jpg" } May 22 12:26:36.813: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 22 12:26:36.813: INFO: update-demo-kitten-pn4v7 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:26:36.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7zwv7" for this suite. May 22 12:26:58.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:26:58.869: INFO: namespace: e2e-tests-kubectl-7zwv7, resource: bindings, ignored listing per whitelist May 22 12:26:58.907: INFO: namespace e2e-tests-kubectl-7zwv7 deletion completed in 22.090439316s • [SLOW TEST:53.650 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:26:58.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 12:26:58.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 22 12:26:59.060: INFO: stderr: "" May 22 12:26:59.060: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 22 12:26:59.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x8ckt' May 22 12:26:59.329: INFO: stderr: "" May 22 12:26:59.329: INFO: stdout: "replicationcontroller/redis-master created\n" May 22 12:26:59.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x8ckt' May 22 12:26:59.674: INFO: stderr: "" May 22 12:26:59.674: INFO: stdout: 
"service/redis-master created\n" STEP: Waiting for Redis master to start. May 22 12:27:00.678: INFO: Selector matched 1 pods for map[app:redis] May 22 12:27:00.678: INFO: Found 0 / 1 May 22 12:27:01.679: INFO: Selector matched 1 pods for map[app:redis] May 22 12:27:01.679: INFO: Found 0 / 1 May 22 12:27:02.678: INFO: Selector matched 1 pods for map[app:redis] May 22 12:27:02.678: INFO: Found 0 / 1 May 22 12:27:03.679: INFO: Selector matched 1 pods for map[app:redis] May 22 12:27:03.679: INFO: Found 1 / 1 May 22 12:27:03.679: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 22 12:27:03.683: INFO: Selector matched 1 pods for map[app:redis] May 22 12:27:03.683: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 22 12:27:03.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-ltklc --namespace=e2e-tests-kubectl-x8ckt' May 22 12:27:03.827: INFO: stderr: "" May 22 12:27:03.827: INFO: stdout: "Name: redis-master-ltklc\nNamespace: e2e-tests-kubectl-x8ckt\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Fri, 22 May 2020 12:26:59 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.55\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://960c22b4679a7438622f38684a65c8ed3051c0a06a58d649f1f9410fee8bd785\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 22 May 2020 12:27:01 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-fgfpp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-fgfpp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-fgfpp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-x8ckt/redis-master-ltklc to hunter-worker2\n Normal Pulled 3s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" May 22 12:27:03.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-x8ckt' May 22 12:27:03.945: INFO: stderr: "" May 22 12:27:03.945: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-x8ckt\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-ltklc\n" May 22 12:27:03.945: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-x8ckt' May 22 12:27:04.044: INFO: stderr: "" May 22 12:27:04.044: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-x8ckt\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.103.252.70\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.55:6379\nSession Affinity: None\nEvents: \n" May 22 12:27:04.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 22 12:27:04.188: INFO: stderr: "" May 22 12:27:04.188: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 22 May 2020 12:26:59 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 22 May 2020 12:26:59 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 22 May 2020 12:26:59 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 22 May 2020 12:26:59 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 67d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 67d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 67d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 67d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 67d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 67d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 67d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi 
(0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 22 12:27:04.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-x8ckt' May 22 12:27:04.294: INFO: stderr: "" May 22 12:27:04.294: INFO: stdout: "Name: e2e-tests-kubectl-x8ckt\nLabels: e2e-framework=kubectl\n e2e-run=8d0c7d81-9c19-11ea-8e9c-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:27:04.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-x8ckt" for this suite. May 22 12:27:26.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:27:26.326: INFO: namespace: e2e-tests-kubectl-x8ckt, resource: bindings, ignored listing per whitelist May 22 12:27:26.387: INFO: namespace e2e-tests-kubectl-x8ckt deletion completed in 22.090162531s • [SLOW TEST:27.480 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:27:26.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 22 12:27:26.480: INFO: Waiting up to 5m0s for pod "downward-api-98835d84-9c27-11ea-8e9c-0242ac110018" in namespace "e2e-tests-downward-api-9hrs4" to be "success or failure" May 22 12:27:26.495: INFO: Pod "downward-api-98835d84-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.579575ms May 22 12:27:28.500: INFO: Pod "downward-api-98835d84-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020421398s May 22 12:27:30.505: INFO: Pod "downward-api-98835d84-9c27-11ea-8e9c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.024675079s May 22 12:27:32.509: INFO: Pod "downward-api-98835d84-9c27-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029145914s STEP: Saw pod success May 22 12:27:32.509: INFO: Pod "downward-api-98835d84-9c27-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:27:32.511: INFO: Trying to get logs from node hunter-worker pod downward-api-98835d84-9c27-11ea-8e9c-0242ac110018 container dapi-container: STEP: delete the pod May 22 12:27:32.530: INFO: Waiting for pod downward-api-98835d84-9c27-11ea-8e9c-0242ac110018 to disappear May 22 12:27:32.534: INFO: Pod downward-api-98835d84-9c27-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:27:32.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9hrs4" for this suite. May 22 12:27:38.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:27:38.581: INFO: namespace: e2e-tests-downward-api-9hrs4, resource: bindings, ignored listing per whitelist May 22 12:27:38.632: INFO: namespace e2e-tests-downward-api-9hrs4 deletion completed in 6.095782685s • [SLOW TEST:12.245 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:27:38.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 12:27:38.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-nn5v6' May 22 12:27:38.838: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 12:27:38.838: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 22 12:27:38.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-nn5v6' May 22 12:27:38.964: INFO: stderr: "" May 22 12:27:38.964: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:27:38.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nn5v6" for this suite. May 22 12:27:45.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:27:45.070: INFO: namespace: e2e-tests-kubectl-nn5v6, resource: bindings, ignored listing per whitelist May 22 12:27:45.074: INFO: namespace e2e-tests-kubectl-nn5v6 deletion completed in 6.083651193s • [SLOW TEST:6.441 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:27:45.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 22 12:27:49.748: INFO: Successfully updated pod "pod-update-a3ac0098-9c27-11ea-8e9c-0242ac110018" STEP: verifying the updated pod is in kubernetes May 22 12:27:49.767: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:27:49.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-82qdg" for this suite. 
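A rough manual equivalent of the "should be updated" spec above, sketched with stock kubectl rather than the Go client (the pod name and label key are illustrative, not values from this run):
  kubectl --kubeconfig=/root/.kube/config label pod pod-update-example time=553514236 --overwrite
  kubectl --kubeconfig=/root/.kube/config get pod pod-update-example --show-labels
Only the label and get subcommands are assumed here; the exact field the spec mutates is whatever the framework's update callback sets on the pod object.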
May 22 12:28:11.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:28:11.810: INFO: namespace: e2e-tests-pods-82qdg, resource: bindings, ignored listing per whitelist May 22 12:28:11.865: INFO: namespace e2e-tests-pods-82qdg deletion completed in 22.094155662s • [SLOW TEST:26.791 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:28:11.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:28:18.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-cfqnx" for this suite. May 22 12:28:24.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:28:24.286: INFO: namespace: e2e-tests-namespaces-cfqnx, resource: bindings, ignored listing per whitelist May 22 12:28:24.318: INFO: namespace e2e-tests-namespaces-cfqnx deletion completed in 6.090363193s STEP: Destroying namespace "e2e-tests-nsdeletetest-ddlmh" for this suite. May 22 12:28:24.321: INFO: Namespace e2e-tests-nsdeletetest-ddlmh was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-454lq" for this suite. 
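A minimal by-hand reproduction of the namespace/service cleanup checked above, using only standard kubectl subcommands (namespace and service names are illustrative):
  kubectl create namespace nsdelete-demo
  kubectl -n nsdelete-demo create service clusterip test-service --tcp=80:80
  kubectl delete namespace nsdelete-demo
  kubectl -n nsdelete-demo get services   # empty or NotFound once deletion finishes
The spec performs the same sequence through the API: create the service, delete the namespace, wait for removal, recreate the namespace, and verify that no service survives.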
May 22 12:28:30.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:28:30.354: INFO: namespace: e2e-tests-nsdeletetest-454lq, resource: bindings, ignored listing per whitelist May 22 12:28:30.409: INFO: namespace e2e-tests-nsdeletetest-454lq deletion completed in 6.087652578s • [SLOW TEST:18.544 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:28:30.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 22 12:28:30.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-beaed110-9c27-11ea-8e9c-0242ac110018" in namespace "e2e-tests-projected-tbgp5" to be "success or failure" May 22 12:28:30.553: INFO: Pod "downwardapi-volume-beaed110-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 39.251078ms May 22 12:28:32.558: INFO: Pod "downwardapi-volume-beaed110-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044069618s May 22 12:28:34.562: INFO: Pod "downwardapi-volume-beaed110-9c27-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048427511s STEP: Saw pod success May 22 12:28:34.563: INFO: Pod "downwardapi-volume-beaed110-9c27-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:28:34.566: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-beaed110-9c27-11ea-8e9c-0242ac110018 container client-container: STEP: delete the pod May 22 12:28:34.596: INFO: Waiting for pod downwardapi-volume-beaed110-9c27-11ea-8e9c-0242ac110018 to disappear May 22 12:28:34.608: INFO: Pod downwardapi-volume-beaed110-9c27-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:28:34.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tbgp5" for this suite. 
May 22 12:28:40.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:28:40.637: INFO: namespace: e2e-tests-projected-tbgp5, resource: bindings, ignored listing per whitelist May 22 12:28:40.692: INFO: namespace e2e-tests-projected-tbgp5 deletion completed in 6.081558739s • [SLOW TEST:10.284 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:28:40.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 22 12:28:45.326: INFO: Successfully updated pod "annotationupdatec4d04a85-9c27-11ea-8e9c-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:28:49.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jlq8s" for this suite. 
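The annotation update that the projected downwardAPI spec above drives through the client can be approximated with kubectl; the pod name, annotation key, and mount path below are illustrative assumptions, not values from this run:
  kubectl annotate pod annotationupdate-example builder=bar --overwrite
  kubectl exec annotationupdate-example -- cat /etc/podinfo/annotations   # path depends on the pod's volumeMounts
The kubelet refreshes the projected file some time after the annotation changes, which is what the spec waits for before reporting success.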
May 22 12:29:11.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:29:11.429: INFO: namespace: e2e-tests-projected-jlq8s, resource: bindings, ignored listing per whitelist May 22 12:29:11.500: INFO: namespace e2e-tests-projected-jlq8s deletion completed in 22.127314932s • [SLOW TEST:30.808 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:29:11.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 22 12:29:11.640: INFO: Creating deployment "test-recreate-deployment" May 22 12:29:11.659: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 22 12:29:11.680: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 22 12:29:13.688: INFO: Waiting deployment "test-recreate-deployment" to complete May 22 12:29:13.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725747351, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725747351, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725747351, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725747351, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 12:29:15.696: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 22 12:29:15.704: INFO: Updating deployment test-recreate-deployment May 22 12:29:15.704: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 22 12:29:16.408: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-bkmkv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bkmkv/deployments/test-recreate-deployment,UID:d7347902-9c27-11ea-99e8-0242ac110002,ResourceVersion:11928839,Generation:2,CreationTimestamp:2020-05-22 12:29:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-22 12:29:16 +0000 UTC 2020-05-22 12:29:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-22 12:29:16 +0000 UTC 2020-05-22 12:29:11 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 22 12:29:16.412: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-bkmkv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bkmkv/replicasets/test-recreate-deployment-589c4bfd,UID:d9b3dfa1-9c27-11ea-99e8-0242ac110002,ResourceVersion:11928837,Generation:1,CreationTimestamp:2020-05-22 12:29:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d7347902-9c27-11ea-99e8-0242ac110002 0xc001c04a5f 0xc001c04c00}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 12:29:16.412: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 22 12:29:16.412: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-bkmkv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bkmkv/replicasets/test-recreate-deployment-5bf7f65dc,UID:d73a4c51-9c27-11ea-99e8-0242ac110002,ResourceVersion:11928825,Generation:2,CreationTimestamp:2020-05-22 12:29:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d7347902-9c27-11ea-99e8-0242ac110002 0xc001c04e00 0xc001c04e01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 12:29:16.416: INFO: Pod "test-recreate-deployment-589c4bfd-h7vms" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-h7vms,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-bkmkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bkmkv/pods/test-recreate-deployment-589c4bfd-h7vms,UID:d9ba219c-9c27-11ea-99e8-0242ac110002,ResourceVersion:11928838,Generation:0,CreationTimestamp:2020-05-22 12:29:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd d9b3dfa1-9c27-11ea-99e8-0242ac110002 0xc001c059bf 0xc001c05af0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vwg8q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vwg8q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vwg8q true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c05b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c05cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:29:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:29:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:29:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-22 12:29:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:29:16.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-bkmkv" for this suite. 
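The Recreate rollout exercised above swaps the whole pod template through the API (the redis container out, an nginx container in). A simpler way to trigger and watch a Recreate-strategy rollout from the command line, with an illustrative image tag and the deployment name taken from the log, would be:
  kubectl set image deployment/test-recreate-deployment nginx=docker.io/library/nginx:1.15-alpine
  kubectl rollout status deployment/test-recreate-deployment
  kubectl get replicasets -l name=sample-pod-3   # the old ReplicaSet scales to 0 before the new one scales up
Note that set image only changes the image, not the container name; the spec's template replacement is broader than what set image can express.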
May 22 12:29:22.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:29:22.628: INFO: namespace: e2e-tests-deployment-bkmkv, resource: bindings, ignored listing per whitelist May 22 12:29:22.652: INFO: namespace e2e-tests-deployment-bkmkv deletion completed in 6.232162623s • [SLOW TEST:11.151 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:29:22.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 22 12:29:22.754: INFO: Waiting up to 5m0s for pod "var-expansion-ddd340ef-9c27-11ea-8e9c-0242ac110018" in namespace "e2e-tests-var-expansion-slh9d" to be "success or failure" May 22 12:29:22.768: INFO: Pod "var-expansion-ddd340ef-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.019724ms May 22 12:29:24.773: INFO: Pod "var-expansion-ddd340ef-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018608955s May 22 12:29:26.777: INFO: Pod "var-expansion-ddd340ef-9c27-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022870108s STEP: Saw pod success May 22 12:29:26.777: INFO: Pod "var-expansion-ddd340ef-9c27-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:29:26.781: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-ddd340ef-9c27-11ea-8e9c-0242ac110018 container dapi-container: STEP: delete the pod May 22 12:29:26.823: INFO: Waiting for pod var-expansion-ddd340ef-9c27-11ea-8e9c-0242ac110018 to disappear May 22 12:29:26.838: INFO: Pod var-expansion-ddd340ef-9c27-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:29:26.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-slh9d" for this suite. 
May 22 12:29:32.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:29:32.893: INFO: namespace: e2e-tests-var-expansion-slh9d, resource: bindings, ignored listing per whitelist May 22 12:29:32.953: INFO: namespace e2e-tests-var-expansion-slh9d deletion completed in 6.111125141s • [SLOW TEST:10.301 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:29:32.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 22 12:29:33.051: INFO: Waiting up to 5m0s for pod "pod-e3f473dd-9c27-11ea-8e9c-0242ac110018" in namespace "e2e-tests-emptydir-sf925" to be "success or failure" May 22 12:29:33.067: INFO: Pod "pod-e3f473dd-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.961844ms May 22 12:29:35.071: INFO: Pod "pod-e3f473dd-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019997367s May 22 12:29:37.075: INFO: Pod "pod-e3f473dd-9c27-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024138859s STEP: Saw pod success May 22 12:29:37.075: INFO: Pod "pod-e3f473dd-9c27-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:29:37.078: INFO: Trying to get logs from node hunter-worker pod pod-e3f473dd-9c27-11ea-8e9c-0242ac110018 container test-container: STEP: delete the pod May 22 12:29:37.091: INFO: Waiting for pod pod-e3f473dd-9c27-11ea-8e9c-0242ac110018 to disappear May 22 12:29:37.096: INFO: Pod pod-e3f473dd-9c27-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:29:37.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sf925" for this suite. 
May 22 12:29:43.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:29:43.134: INFO: namespace: e2e-tests-emptydir-sf925, resource: bindings, ignored listing per whitelist May 22 12:29:43.205: INFO: namespace e2e-tests-emptydir-sf925 deletion completed in 6.104975924s • [SLOW TEST:10.251 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:29:43.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-ea0fffc6-9c27-11ea-8e9c-0242ac110018 STEP: Creating a pod to test consume secrets May 22 12:29:43.303: INFO: Waiting up to 5m0s for pod "pod-secrets-ea11adb6-9c27-11ea-8e9c-0242ac110018" in namespace "e2e-tests-secrets-f7m9v" to be "success or failure" May 22 12:29:43.306: INFO: Pod "pod-secrets-ea11adb6-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.493224ms May 22 12:29:45.310: INFO: Pod "pod-secrets-ea11adb6-9c27-11ea-8e9c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007591754s May 22 12:29:47.356: INFO: Pod "pod-secrets-ea11adb6-9c27-11ea-8e9c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05300262s STEP: Saw pod success May 22 12:29:47.356: INFO: Pod "pod-secrets-ea11adb6-9c27-11ea-8e9c-0242ac110018" satisfied condition "success or failure" May 22 12:29:47.359: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ea11adb6-9c27-11ea-8e9c-0242ac110018 container secret-volume-test: STEP: delete the pod May 22 12:29:47.383: INFO: Waiting for pod pod-secrets-ea11adb6-9c27-11ea-8e9c-0242ac110018 to disappear May 22 12:29:47.388: INFO: Pod pod-secrets-ea11adb6-9c27-11ea-8e9c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:29:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-f7m9v" for this suite. 
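Creating the kind of secret the spec above consumes is a one-liner; the key-to-path mapping itself lives in the pod spec (the secret volume's items list), not in the secret. Names and values here are illustrative:
  kubectl create secret generic secret-test-map --from-literal=data-1=value-1
  kubectl get secret secret-test-map -o yaml   # values are shown base64-encoded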
May 22 12:29:53.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:29:53.437: INFO: namespace: e2e-tests-secrets-f7m9v, resource: bindings, ignored listing per whitelist May 22 12:29:53.471: INFO: namespace e2e-tests-secrets-f7m9v deletion completed in 6.080002425s • [SLOW TEST:10.266 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:29:53.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 22 12:29:57.592: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-f02fabf1-9c27-11ea-8e9c-0242ac110018,GenerateName:,Namespace:e2e-tests-events-qpt78,SelfLink:/api/v1/namespaces/e2e-tests-events-qpt78/pods/send-events-f02fabf1-9c27-11ea-8e9c-0242ac110018,UID:f0301a6e-9c27-11ea-99e8-0242ac110002,ResourceVersion:11929027,Generation:0,CreationTimestamp:2020-05-22 12:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 553514236,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gdg74 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdg74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-gdg74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7e440} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f7e460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:29:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:29:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:29:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 12:29:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.58,StartTime:2020-05-22 12:29:53 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-22 12:29:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://cd6d37b63ce95f6330e0d56872ff818d18a31766ea25ecb3cbb1e45fc4bbaaff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 22 12:29:59.597: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 22 12:30:01.600: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:30:01.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-qpt78" for this suite. May 22 12:30:39.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:30:39.725: INFO: namespace: e2e-tests-events-qpt78, resource: bindings, ignored listing per whitelist May 22 12:30:39.759: INFO: namespace e2e-tests-events-qpt78 deletion completed in 38.094862181s • [SLOW TEST:46.287 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 22 12:30:39.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 22 12:30:47.961: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:30:47.966: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:30:49.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:30:49.971: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:30:51.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:30:51.973: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:30:53.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:30:53.969: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:30:55.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:30:55.973: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:30:57.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:30:58.015: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:30:59.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:30:59.971: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:31:01.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:31:01.971: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:31:03.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:31:03.971: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:31:05.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:31:05.972: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:31:07.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:31:07.970: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:31:09.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:31:09.971: INFO: Pod pod-with-prestop-exec-hook still exists May 22 12:31:11.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 12:31:11.971: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 22 12:31:11.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-drmjp" for this suite. 
May 22 12:31:33.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 12:31:34.049: INFO: namespace: e2e-tests-container-lifecycle-hook-drmjp, resource: bindings, ignored listing per whitelist
May 22 12:31:34.072: INFO: namespace e2e-tests-container-lifecycle-hook-drmjp deletion completed in 22.088105094s
• [SLOW TEST:54.313 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
May 22 12:31:34.073: INFO: Running AfterSuite actions on all nodes
May 22 12:31:34.073: INFO: Running AfterSuite actions on node 1
May 22 12:31:34.073: INFO: Skipping dumping logs from cluster
Ran 200 of 2164 Specs in 6279.073 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS